Dec 03 10:53:07 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 03 10:53:08 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 10:53:08 crc restorecon[4702]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 
10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 
10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 
crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 10:53:08 crc restorecon[4702]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc 
restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 10:53:08 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 03 10:53:09 crc kubenswrapper[4756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 10:53:09 crc kubenswrapper[4756]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 10:53:09 crc kubenswrapper[4756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 10:53:09 crc kubenswrapper[4756]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 10:53:09 crc kubenswrapper[4756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 10:53:09 crc kubenswrapper[4756]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.077636 4756 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080684 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080704 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080709 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080713 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080718 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080723 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080728 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080733 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080738 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080743 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080750 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080755 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080762 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080774 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080778 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080783 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080786 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080790 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080794 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080798 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission 
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080802 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080806 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080809 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080813 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080817 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080821 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080824 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080829 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080832 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080836 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080840 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080845 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080849 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080853 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080857 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080861 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080871 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080874 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080879 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080882 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080886 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080890 4756 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080894 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080898 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080904 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080908 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080912 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080916 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080920 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080923 4756 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080927 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080930 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080935 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080940 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080944 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080965 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080969 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080973 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080976 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080981 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080984 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080988 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080992 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.080997 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.081001 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.081004 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.081008 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.081011 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.081015 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.081020 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.081024 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081276 4756 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081288 4756 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081301 4756 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081308 4756 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081315 4756 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081320 4756 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081326 4756 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081332 4756 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081336 4756 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081341 4756 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081346 4756 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081351 4756 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081355 4756 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081360 4756 flags.go:64] FLAG: --cgroup-root=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081364 4756 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081369 4756 flags.go:64] FLAG: --client-ca-file=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081373 4756 flags.go:64] FLAG: --cloud-config=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081377 4756 flags.go:64] FLAG: --cloud-provider=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081381 4756 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081390 4756 flags.go:64] FLAG: --cluster-domain=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081394 4756 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081399 4756 flags.go:64] FLAG: --config-dir=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081403 4756 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081408 4756 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081414 4756 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081419 4756 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081423 4756 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081434 4756 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081438 4756 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081442 4756 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081446 4756 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081450 4756 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081455 4756 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081460 4756 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081464 4756 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081468 4756 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081473 4756 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081479 4756 flags.go:64] FLAG: --enable-server="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081484 4756 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081495 4756 flags.go:64] FLAG: --event-burst="100"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081500 4756 flags.go:64] FLAG: --event-qps="50"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081505 4756 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081510 4756 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081515 4756 flags.go:64] FLAG: --eviction-hard=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081520 4756 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081524 4756 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081528 4756 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081532 4756 flags.go:64] FLAG: --eviction-soft=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081536 4756 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081540 4756 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081545 4756 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081549 4756 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081553 4756 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081557 4756 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081561 4756 flags.go:64] FLAG: --feature-gates=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081566 4756 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081570 4756 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081574 4756 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081578 4756 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081582 4756 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081587 4756 flags.go:64] FLAG: --help="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081591 4756 flags.go:64] FLAG: --hostname-override=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081595 4756 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081605 4756 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081609 4756 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081614 4756 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081618 4756 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081621 4756 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081626 4756 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081631 4756 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081635 4756 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081639 4756 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081643 4756 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081648 4756 flags.go:64] FLAG: --kube-reserved=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081652 4756 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081655 4756 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081660 4756 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081664 4756 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081668 4756 flags.go:64] FLAG: --lock-file=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081673 4756 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081677 4756 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081682 4756 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081688 4756 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081692 4756 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081696 4756 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081700 4756 flags.go:64] FLAG: --logging-format="text"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081705 4756 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081710 4756 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081714 4756 flags.go:64] FLAG: --manifest-url=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081718 4756 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081724 4756 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081728 4756 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081734 4756 flags.go:64] FLAG: --max-pods="110"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081739 4756 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081743 4756 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081747 4756 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081751 4756 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081755 4756 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081759 4756 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081769 4756 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081779 4756 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081784 4756 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081788 4756 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081792 4756 flags.go:64] FLAG: --pod-cidr=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081796 4756 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081805 4756 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081809 4756 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081814 4756 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081818 4756 flags.go:64] FLAG: --port="10250"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081822 4756 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081826 4756 flags.go:64] FLAG: --provider-id=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081831 4756 flags.go:64] FLAG: --qos-reserved=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081836 4756 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081840 4756 flags.go:64] FLAG: --register-node="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081844 4756 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081848 4756 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081857 4756 flags.go:64] FLAG: --registry-burst="10"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081861 4756 flags.go:64] FLAG: --registry-qps="5"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081865 4756 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081869 4756 flags.go:64] FLAG: --reserved-memory=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081875 4756 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081879 4756 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081884 4756 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081888 4756 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081892 4756 flags.go:64] FLAG: --runonce="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081896 4756 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081900 4756 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081904 4756 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081908 4756 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081912 4756 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081916 4756 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081920 4756 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081928 4756 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081932 4756 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081937 4756 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081964 4756 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081969 4756 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081973 4756 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081978 4756 flags.go:64] FLAG: --system-cgroups=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081982 4756 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081988 4756 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081992 4756 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.081996 4756 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082004 4756 flags.go:64] FLAG: --tls-min-version=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082008 4756 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082012 4756 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082016 4756 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082020 4756 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082024 4756 flags.go:64] FLAG: --v="2"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082030 4756 flags.go:64] FLAG: --version="false"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082035 4756 flags.go:64] FLAG: --vmodule=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082041 4756 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082046 4756 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082194 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082199 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082203 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082206 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082210 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082214 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082217 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082221 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082224 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082227 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082232 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082239 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082243 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082247 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082251 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082254 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082258 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082262 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082272 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082276 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082280 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082283 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082287 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082290 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082293 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082297 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082300 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082303 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082307 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082310 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082313 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082317 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082321 4756 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082324 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082327 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082331 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082334 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082337 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082342 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082346 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082349 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082353 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082356 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082370 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082375 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082379 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082383 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082386 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082390 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082394 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082398 4756 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082402 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082407 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082410 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082420 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082424 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082428 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082432 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082435 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082439 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082443 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082446 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082450 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082454 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082458 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082462 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082465 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082469 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082472 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082476 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.082479 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.082485 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.095049 4756 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.095081 4756 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095162 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095170 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095178 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095182 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095186 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095191 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095195 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095201 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095205 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095209 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095214 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095218 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095223 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095227 4756 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095231 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095235 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095238 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095242 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095245 4756 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095249 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095253 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095257 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095261 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095264 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095268 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095272 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095275 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095278 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095282 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095285 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095289 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095293 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095296 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 
10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095303 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095308 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095313 4756 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095319 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095324 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095329 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095334 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095340 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095347 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095352 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095356 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095361 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095365 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095368 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095372 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095375 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095379 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095382 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095386 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095389 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095393 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095396 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095400 
4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095403 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095406 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095410 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095413 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095416 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095420 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095424 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095429 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095433 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095436 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095442 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095447 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095451 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095454 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095459 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.095466 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095613 4756 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095627 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095633 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095638 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095642 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095648 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 
10:53:09.095652 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095656 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095660 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095663 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095667 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095672 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095676 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095680 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095684 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095688 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095691 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095695 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095699 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095702 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095706 4756 feature_gate.go:330] unrecognized feature 
gate: ClusterMonitoringConfig Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095709 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095713 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095717 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095721 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095725 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095728 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095733 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095737 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095741 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095744 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095748 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095752 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095756 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095761 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095764 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095768 4756 feature_gate.go:330] unrecognized feature gate: Example Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095771 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095775 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095778 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095783 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095786 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095790 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095793 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095797 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095801 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095804 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095808 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095811 
4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095815 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095819 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095822 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095826 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095829 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095832 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095837 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095840 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095844 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095848 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095852 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095856 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095860 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095865 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095871 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095876 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095880 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095885 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095890 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095894 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095899 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.095903 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.095910 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.096118 4756 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.098575 4756 bootstrap.go:85] "Current kubeconfig 
file contents are still valid, no bootstrap necessary" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.098659 4756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.099259 4756 server.go:997] "Starting client certificate rotation" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.099279 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.099457 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-10 22:52:48.605403439 +0000 UTC Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.099629 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.105431 4756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.107275 4756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.109574 4756 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.117897 4756 log.go:25] "Validated CRI v1 runtime API" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.134046 4756 log.go:25] "Validated CRI v1 image API" Dec 03 
10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.136863 4756 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.139528 4756 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-03-10-48-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.139611 4756 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.164793 4756 manager.go:217] Machine: {Timestamp:2025-12-03 10:53:09.163203306 +0000 UTC m=+0.193204590 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:252ddd87-ab9d-46d8-a45d-0324a35cd261 BootID:a916e5b8-6e5c-4097-b971-a8f4ba12cdc7 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 
Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:08:61:79 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:08:61:79 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:22:20:93 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:69:26:ac Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7a:96:ac Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:44:6f:f2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:88:54:b1:78:c3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:7e:b3:3c:6e:2b:a1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 
Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] 
SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.165136 4756 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.165451 4756 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.166092 4756 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.166352 4756 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.166400 4756 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.166681 4756 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.166695 4756 container_manager_linux.go:303] "Creating device plugin manager"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.166934 4756 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.167014 4756 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.167261 4756 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.167503 4756 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.168259 4756 kubelet.go:418] "Attempting to sync node with API server"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.168305 4756 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.168348 4756 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.168372 4756 kubelet.go:324] "Adding apiserver pod source"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.168393 4756 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.170851 4756 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.171119 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.171213 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.171411 4756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.171761 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.171883 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.172483 4756 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173230 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173266 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173277 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173288 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173305 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173317 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173327 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173344 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173355 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173368 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173405 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173415 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.173626 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.174307 4756 server.go:1280] "Started kubelet"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.174624 4756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.174622 4756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.174925 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.175408 4756 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.176534 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.176588 4756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 10:53:09 crc systemd[1]: Started Kubernetes Kubelet.
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.176622 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:16:16.60227659 +0000 UTC
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.176836 4756 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.176858 4756 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.176928 4756 server.go:460] "Adding debug handlers to kubelet server"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.177054 4756 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.178089 4756 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.178804 4756 factory.go:55] Registering systemd factory
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.178846 4756 factory.go:221] Registration of the systemd container factory successfully
Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.178843 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.179622 4756 factory.go:153] Registering CRI-O factory
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.179667 4756 factory.go:221] Registration of the crio container factory successfully
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.179754 4756 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.179888 4756 factory.go:103] Registering Raw factory
Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.179862 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.179966 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.179915 4756 manager.go:1196] Started watching for new ooms in manager
Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.179565 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187daf21e8d647fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 10:53:09.174241276 +0000 UTC m=+0.204242530,LastTimestamp:2025-12-03 10:53:09.174241276 +0000 UTC m=+0.204242530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.181825 4756 manager.go:319] Starting recovery of all containers
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.196271 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.196686 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.196827 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.196985 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.197135 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.197293 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.197430 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.197553 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.197678 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.197813 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.197941 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.198099 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.198231 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.198394 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.198577 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.198773 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.198994 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.199157 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.199408 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.199549 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.199687 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.199850 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.200077 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.200867 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.200920 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.200993 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201075 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201107 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201131 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201158 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201180 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201238 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201263 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201293 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201326 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201355 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201383 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201412 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201441 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201475 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201504 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201526 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201549 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201571 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201594 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201626 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201655 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201688 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201721 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201751 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201784 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201816 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201856 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201888 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.201993 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202031 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202068 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202099 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202133 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202165 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202197 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202242 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202281 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202309 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202340 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202370 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202402 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202434 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202464 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202496 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202529 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202559 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202593 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202628 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202661 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202694 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202723 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202756 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202786 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202814 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202843 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202873 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202901 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202929 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.202997 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203033 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203064 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203097 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203129 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203161 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203192 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203224 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203254 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203288 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203319 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203351 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203387 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203418 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203452 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203484 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203516 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203549 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203580 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" 
seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203615 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203660 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203695 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203729 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203765 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203798 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203849 4756 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203882 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.203918 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204084 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204126 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204161 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204192 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204223 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204252 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204287 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204316 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204350 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204380 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204409 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204439 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204468 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204501 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204530 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204557 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204590 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204620 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204652 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204682 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204715 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204744 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204782 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204809 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204838 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204868 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204897 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204926 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.204993 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205027 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205055 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205084 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205113 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205142 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205170 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205237 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205270 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205299 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205330 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205358 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205390 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205417 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205444 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205472 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205506 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205537 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205569 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205595 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205622 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205648 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205680 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205708 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" 
seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205736 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205764 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205796 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205823 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205851 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.205877 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 
10:53:09.205918 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.206030 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208676 4756 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208774 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208803 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208821 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208836 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208849 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208860 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208873 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208884 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208895 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: 
I1203 10:53:09.208912 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208924 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208935 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208974 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.208990 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209001 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209049 4756 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209069 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209084 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209098 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209110 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209123 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209138 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209155 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209176 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209219 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209241 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209260 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209277 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209296 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209315 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209335 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209368 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209382 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209399 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209416 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209429 4756 reconstruct.go:97] "Volume reconstruction finished" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.209441 4756 reconciler.go:26] "Reconciler: start to sync state" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.226383 4756 manager.go:324] Recovery completed Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.230424 4756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.232523 4756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.232566 4756 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.232608 4756 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.232670 4756 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.233753 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.233813 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.241780 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.244579 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.244625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.244635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.245770 4756 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.245794 4756 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.245821 4756 state_mem.go:36] "Initialized new in-memory state store" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.278284 4756 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.304333 4756 policy_none.go:49] "None policy: Start" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.305970 4756 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.306008 4756 state_mem.go:35] "Initializing new in-memory state store" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.333025 4756 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 10:53:09 crc 
kubenswrapper[4756]: I1203 10:53:09.371561 4756 manager.go:334] "Starting Device Plugin manager" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.371698 4756 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.371724 4756 server.go:79] "Starting device plugin registration server" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.372523 4756 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.372567 4756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.373288 4756 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.373498 4756 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.373534 4756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.380239 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.384235 4756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.473220 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.474430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.474509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.474528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.474572 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.475383 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.533553 4756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.533718 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.535155 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.535195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.535212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.535361 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.535879 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.536063 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.536422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.536539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.536562 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.536994 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.537075 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.537119 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.537779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.537815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.537830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.537979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.537997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.538009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.538570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.538595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.538604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.538697 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.538987 
4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.539084 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.539346 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.539405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.539423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.539612 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.539944 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540066 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540340 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540712 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.540734 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.541426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.541456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.541467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.541663 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.541695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.541708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615534 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615561 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615586 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615615 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615708 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615758 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615793 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615807 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615829 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615845 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615861 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615880 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.615907 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.675793 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.677384 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.677440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.677450 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.677477 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.678090 4756 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.717732 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.717853 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.717897 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.717930 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.717943 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718003 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718050 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718090 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718126 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718133 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718182 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718187 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718208 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718119 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718230 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718235 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718187 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718194 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718306 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718341 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718340 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718380 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718398 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718418 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718436 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718442 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718456 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718464 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.718438 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: E1203 10:53:09.781691 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.877756 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.889727 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.898285 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1e6eb263763dc6bfcecf590822dd2457138d04a495a9e76b8cedb4cc2874ecd6 WatchSource:0}: Error finding container 1e6eb263763dc6bfcecf590822dd2457138d04a495a9e76b8cedb4cc2874ecd6: Status 404 returned error can't find the container with id 1e6eb263763dc6bfcecf590822dd2457138d04a495a9e76b8cedb4cc2874ecd6 Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.904892 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3a5835323c60d1c3a5fd79e8e98b3802c073ab611e2e4fe191d091d7f1ad764c WatchSource:0}: Error finding container 3a5835323c60d1c3a5fd79e8e98b3802c073ab611e2e4fe191d091d7f1ad764c: Status 404 returned error can't find the container with id 3a5835323c60d1c3a5fd79e8e98b3802c073ab611e2e4fe191d091d7f1ad764c Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.908021 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.929689 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: I1203 10:53:09.937122 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.947165 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-63aafdcdd7c4d8f6e586db879b86bcad88a55913f0c9d2de076bd15d0fd36a4a WatchSource:0}: Error finding container 63aafdcdd7c4d8f6e586db879b86bcad88a55913f0c9d2de076bd15d0fd36a4a: Status 404 returned error can't find the container with id 63aafdcdd7c4d8f6e586db879b86bcad88a55913f0c9d2de076bd15d0fd36a4a Dec 03 10:53:09 crc kubenswrapper[4756]: W1203 10:53:09.957171 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-44ea5af447d93fa1f7f2911dd18682aec85cb77720d84805e284ac63b0a1c5af WatchSource:0}: Error finding container 44ea5af447d93fa1f7f2911dd18682aec85cb77720d84805e284ac63b0a1c5af: Status 404 returned error can't find the container with id 44ea5af447d93fa1f7f2911dd18682aec85cb77720d84805e284ac63b0a1c5af Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.078202 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.080163 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.080216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.080229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.080269 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 10:53:10 crc 
kubenswrapper[4756]: E1203 10:53:10.081013 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Dec 03 10:53:10 crc kubenswrapper[4756]: W1203 10:53:10.095857 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:10 crc kubenswrapper[4756]: E1203 10:53:10.095967 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:10 crc kubenswrapper[4756]: W1203 10:53:10.148223 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:10 crc kubenswrapper[4756]: E1203 10:53:10.148384 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:10 crc kubenswrapper[4756]: W1203 10:53:10.163721 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial 
tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:10 crc kubenswrapper[4756]: E1203 10:53:10.163831 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.176019 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.177081 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:16:22.36187438 +0000 UTC Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.177137 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 199h23m12.184738951s for next certificate rotation Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.239318 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a5835323c60d1c3a5fd79e8e98b3802c073ab611e2e4fe191d091d7f1ad764c"} Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.240532 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1e6eb263763dc6bfcecf590822dd2457138d04a495a9e76b8cedb4cc2874ecd6"} Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.242661 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"44ea5af447d93fa1f7f2911dd18682aec85cb77720d84805e284ac63b0a1c5af"} Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.244138 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"63aafdcdd7c4d8f6e586db879b86bcad88a55913f0c9d2de076bd15d0fd36a4a"} Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.245248 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f16b1fd27e2186e2c27ca9d4bd1de8bc1ece89d3288dc1f724cc08f830084a03"} Dec 03 10:53:10 crc kubenswrapper[4756]: E1203 10:53:10.582921 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="1.6s" Dec 03 10:53:10 crc kubenswrapper[4756]: W1203 10:53:10.589317 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:10 crc kubenswrapper[4756]: E1203 10:53:10.589444 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.881600 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.887875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.887978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.888001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:10 crc kubenswrapper[4756]: I1203 10:53:10.888042 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 10:53:10 crc kubenswrapper[4756]: E1203 10:53:10.888912 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.176441 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.203812 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 10:53:11 crc kubenswrapper[4756]: E1203 10:53:11.205088 4756 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.251021 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.251077 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.251089 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.251100 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.251146 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.252206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.252257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.252273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.253801 4756 generic.go:334] "Generic (PLEG): 
container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270" exitCode=0 Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.253843 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.254015 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.255118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.255175 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.255194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.255726 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d3a1973df1495d137d0f43b666da34eead1b891a539390ffc04d031acee490a5" exitCode=0 Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.255821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d3a1973df1495d137d0f43b666da34eead1b891a539390ffc04d031acee490a5"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.255965 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.257058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.257095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.257105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.257447 4756 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79" exitCode=0 Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.257521 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.257560 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.258547 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.258666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.258711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.258723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.259202 4756 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3" exitCode=0 Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.259244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.259265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.259276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.259297 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.259246 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3"} Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.264573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.264611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.264625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:11 crc kubenswrapper[4756]: I1203 10:53:11.571143 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:11 crc kubenswrapper[4756]: E1203 10:53:11.588220 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187daf21e8d647fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 10:53:09.174241276 +0000 UTC m=+0.204242530,LastTimestamp:2025-12-03 10:53:09.174241276 +0000 UTC m=+0.204242530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 10:53:11 crc kubenswrapper[4756]: W1203 10:53:11.853702 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:11 crc kubenswrapper[4756]: E1203 10:53:11.853821 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:11 crc kubenswrapper[4756]: W1203 10:53:11.920997 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Dec 03 10:53:11 crc kubenswrapper[4756]: E1203 10:53:11.921117 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.265916 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28" exitCode=0 Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.266004 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.266074 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.267378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.267428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.267452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.269991 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e0ccc45a014b5221a5e22a35b29df5446825770844e7e9b3e8cc01228954ca18"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.270013 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.271274 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.271312 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.271323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.274307 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.274340 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.274355 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.274429 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.275521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.275550 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.275561 
4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.278060 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.278112 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.278117 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.278216 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.278229 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb"} Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.278936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.278993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.279007 4756 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.440007 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.446402 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.489280 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.490782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.490825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.490837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:12 crc kubenswrapper[4756]: I1203 10:53:12.490874 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.286661 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a"} Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.286890 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.288234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 
10:53:13.288263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.288272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.289764 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b" exitCode=0 Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.289906 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.289914 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.290522 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b"} Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.290596 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.290794 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.290844 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291360 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 
10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.291569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.293184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.293232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.293251 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:13 crc kubenswrapper[4756]: I1203 10:53:13.406311 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295287 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2ca64c7bcb4c04c7412d3d79d70ece2b27f16d0343fe0d0ec069771f734a67ec"} Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295330 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295345 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295390 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295403 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295337 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"00447ddce8e8cd04c24e8c4c48311968ce8c53aaed2913aa32c5b3673b9dc658"} Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295455 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"88faa812e4c215b5d08c639d100ec61f8c5867bf2e88a2954e4156f53b3257d1"} Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.295553 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.296916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.296971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.296915 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.297011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.297025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.296989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.297607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.297645 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:14 crc kubenswrapper[4756]: I1203 10:53:14.297658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.307888 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"011be6cdd5ebe67b7b26f6d81922abed2cfedd5c2256a7338662869c1128ca35"} Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.307994 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b934076ce1b273fa132a066e5641af99b6e0b62f002bc20f10f0bfa4f2c1f7e4"} Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.308163 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.309654 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.309731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.309752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.331420 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 10:53:15 crc kubenswrapper[4756]: I1203 10:53:15.508570 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.057469 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.057727 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.059172 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.059244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.059260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.310026 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.310912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:16 crc 
kubenswrapper[4756]: I1203 10:53:16.310970 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:16 crc kubenswrapper[4756]: I1203 10:53:16.310982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.010185 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.010480 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.010549 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.012624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.012731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.012760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.313463 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.314561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.314624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.314645 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.451804 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.452112 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.453616 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.453660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.453670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.566505 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.566672 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.566710 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.568665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.568730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:17 crc kubenswrapper[4756]: I1203 10:53:17.568751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:18 crc kubenswrapper[4756]: I1203 10:53:18.850651 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 10:53:18 crc kubenswrapper[4756]: I1203 10:53:18.850846 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:18 crc kubenswrapper[4756]: I1203 10:53:18.852228 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:18 crc kubenswrapper[4756]: I1203 10:53:18.852276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:18 crc kubenswrapper[4756]: I1203 10:53:18.852294 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:19 crc kubenswrapper[4756]: E1203 10:53:19.384721 4756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 10:53:19 crc kubenswrapper[4756]: I1203 10:53:19.483291 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:19 crc kubenswrapper[4756]: I1203 10:53:19.483475 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:19 crc kubenswrapper[4756]: I1203 10:53:19.484798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:19 crc kubenswrapper[4756]: I1203 10:53:19.484843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:19 crc kubenswrapper[4756]: I1203 10:53:19.484852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:20 crc kubenswrapper[4756]: I1203 10:53:20.452416 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 10:53:20 crc kubenswrapper[4756]: I1203 10:53:20.452514 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 10:53:22 crc kubenswrapper[4756]: I1203 10:53:22.177451 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 03 10:53:22 crc kubenswrapper[4756]: E1203 10:53:22.185023 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 03 10:53:22 crc kubenswrapper[4756]: W1203 10:53:22.247789 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 10:53:22 crc kubenswrapper[4756]: I1203 10:53:22.247933 4756 trace.go:236] Trace[259773998]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 10:53:12.246) (total time: 10001ms): Dec 03 10:53:22 crc kubenswrapper[4756]: Trace[259773998]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (10:53:22.247) Dec 03 10:53:22 crc kubenswrapper[4756]: Trace[259773998]: [10.001115641s] [10.001115641s] END Dec 03 10:53:22 crc kubenswrapper[4756]: E1203 10:53:22.247990 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 10:53:22 crc kubenswrapper[4756]: W1203 10:53:22.433425 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 10:53:22 crc kubenswrapper[4756]: I1203 10:53:22.433557 4756 trace.go:236] Trace[172188949]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 10:53:12.432) (total time: 10001ms): Dec 03 10:53:22 crc kubenswrapper[4756]: Trace[172188949]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:53:22.433) Dec 03 10:53:22 crc kubenswrapper[4756]: Trace[172188949]: [10.001376006s] [10.001376006s] END Dec 03 10:53:22 crc kubenswrapper[4756]: E1203 10:53:22.433583 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 10:53:22 crc kubenswrapper[4756]: E1203 10:53:22.492661 4756 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 03 10:53:23 crc kubenswrapper[4756]: I1203 10:53:23.193479 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 10:53:23 crc kubenswrapper[4756]: I1203 10:53:23.193548 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 10:53:23 crc kubenswrapper[4756]: I1203 10:53:23.201635 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 10:53:23 crc kubenswrapper[4756]: I1203 10:53:23.201716 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.357126 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.357294 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 
10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.358433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.358465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.358477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.373841 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.692787 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.694296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.694367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.694383 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:25 crc kubenswrapper[4756]: I1203 10:53:25.694426 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 10:53:25 crc kubenswrapper[4756]: E1203 10:53:25.697892 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.062602 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:26 crc kubenswrapper[4756]: 
I1203 10:53:26.062804 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.065043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.065083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.065094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.337175 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.338433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.338538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:26 crc kubenswrapper[4756]: I1203 10:53:26.338575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.020040 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.021421 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.024863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.024910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.024924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.028797 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.199944 4756 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.339331 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.340190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.340223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.340240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:27 crc kubenswrapper[4756]: I1203 10:53:27.633508 4756 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.188914 4756 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.198455 4756 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.200876 4756 trace.go:236] Trace[184656745]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 10:53:15.928) (total time: 12271ms): Dec 03 10:53:28 crc kubenswrapper[4756]: 
Trace[184656745]: ---"Objects listed" error: 12271ms (10:53:28.200) Dec 03 10:53:28 crc kubenswrapper[4756]: Trace[184656745]: [12.271874388s] [12.271874388s] END Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.200908 4756 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.200946 4756 trace.go:236] Trace[1079297747]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 10:53:16.946) (total time: 11254ms): Dec 03 10:53:28 crc kubenswrapper[4756]: Trace[1079297747]: ---"Objects listed" error: 11253ms (10:53:28.200) Dec 03 10:53:28 crc kubenswrapper[4756]: Trace[1079297747]: [11.254110918s] [11.254110918s] END Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.200992 4756 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.231591 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34278->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.231697 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34278->192.168.126.11:17697: read: connection reset by peer" Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.232057 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34290->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.232131 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34290->192.168.126.11:17697: read: connection reset by peer" Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.232483 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.232523 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.238820 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.245291 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.343140 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 10:53:28 crc 
kubenswrapper[4756]: I1203 10:53:28.344870 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a" exitCode=255 Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.344982 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a"} Dec 03 10:53:28 crc kubenswrapper[4756]: E1203 10:53:28.349092 4756 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:53:28 crc kubenswrapper[4756]: I1203 10:53:28.439865 4756 scope.go:117] "RemoveContainer" containerID="ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.103828 4756 csr.go:261] certificate signing request csr-sh4pt is approved, waiting to be issued Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.112605 4756 csr.go:257] certificate signing request csr-sh4pt is issued Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.178238 4756 apiserver.go:52] "Watching apiserver" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.180753 4756 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.181028 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.181365 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.181428 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.181442 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.181677 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.181923 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.182164 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.182210 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.182338 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.182455 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.183688 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.183744 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.183756 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.183843 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.184135 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.184272 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.184398 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.184814 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.184888 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.224035 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.276652 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.278125 4756 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.286492 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.300588 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304821 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304859 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304881 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304900 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304922 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304937 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304972 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.304988 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305003 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305021 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305040 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305055 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305095 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305110 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305127 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305143 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305160 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305146 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305177 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305242 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305270 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305288 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305366 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305455 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305465 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305623 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305648 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305804 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305843 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.305879 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306152 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306151 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306225 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306375 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306415 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306522 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306529 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306542 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306553 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306566 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306592 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306663 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306686 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306704 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306720 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306737 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306747 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306754 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306805 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306816 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306826 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306832 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306865 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306883 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306899 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 
03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306914 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306931 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306928 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306946 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.306983 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307001 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307016 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307031 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307046 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307061 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307077 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307093 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307109 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307124 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 10:53:29 crc 
kubenswrapper[4756]: I1203 10:53:29.307000 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307138 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307037 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307115 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307154 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307186 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307194 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307220 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307232 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307245 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307241 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307260 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307271 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307300 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307344 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307355 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307358 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307379 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307405 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307438 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307462 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307464 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.307490 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.308560 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.308659 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.308719 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.308907 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.308973 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.308982 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309057 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309061 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309097 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309132 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309777 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.308687 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309311 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309340 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309825 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309393 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309424 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309585 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309621 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309839 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309935 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.309980 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310012 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310063 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310094 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310119 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310152 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310207 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310234 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310265 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310286 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310312 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310340 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310345 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310366 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310391 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310420 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310451 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310474 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310501 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310537 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310564 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310586 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310614 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310644 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310675 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310696 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310723 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310757 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310787 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310811 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310840 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310869 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310893 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310921 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310967 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310997 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311091 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311123 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311147 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311230 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311260 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311292 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311323 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311355 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311384 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311406 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311434 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311462 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311495 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311529 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311559 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311582 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311611 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311641 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311672 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311697 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311724 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311752 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311788 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311817 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311851 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311874 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311903 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311931 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312305 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312343 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312373 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312401 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312426 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312457 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312483 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312510 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312536 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312566 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312591 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312623 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312673 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312709 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312736 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312766 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312795 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312820 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312848 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312877 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312905 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312928 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312976 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313007 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313032 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313061 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313090 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313113 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313141 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313167 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313195 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313574 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313805 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313840 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313868 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313901 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName:
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313931 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313975 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314005 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314057 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314089 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 10:53:29 crc 
kubenswrapper[4756]: I1203 10:53:29.314112 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314138 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314167 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314192 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314221 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314288 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314320 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314343 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314373 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314416 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314441 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314471 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314502 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314529 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314559 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314592 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314622 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314646 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314676 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314705 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314732 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314773 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310487 4756 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310650 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.310737 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311202 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311225 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311391 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311450 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311476 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311493 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311504 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311767 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311734 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.311383 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.312746 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.313191 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314419 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.316347 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.316822 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.316886 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.317485 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.317733 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.317765 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.317879 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.318012 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.318195 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.318271 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.318480 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.318598 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.318734 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.319001 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.326548 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.327034 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.327413 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.327442 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.327978 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.328308 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.328566 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.328779 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.328993 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.329618 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.329766 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.329796 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.329729 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.329980 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.330069 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.330221 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.330233 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.330261 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.330396 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.331033 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.331736 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.331734 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.331920 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.332133 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.332230 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.332381 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.332416 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.332848 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.332859 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.333267 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.335370 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.335418 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.335835 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.335880 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336338 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336350 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336542 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336567 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336587 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336749 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336802 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336834 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.336976 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337109 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337299 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337313 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337299 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337541 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337715 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337892 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.337906 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338076 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338094 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.314885 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338237 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338340 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338360 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338622 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338710 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338830 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.338894 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339131 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339439 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339524 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339546 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339671 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339825 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339880 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339895 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.339937 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.340088 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:53:29.830247272 +0000 UTC m=+20.860248506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340191 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340193 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod 
"1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340218 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340222 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340243 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340244 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340253 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340262 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340317 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340366 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340511 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340532 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340559 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340578 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340596 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340615 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340633 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340651 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340693 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340702 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340707 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340714 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340849 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340873 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340884 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.340938 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341074 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341088 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341101 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341130 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc 
kubenswrapper[4756]: I1203 10:53:29.341141 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341151 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341161 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341170 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341179 4756 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341206 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341218 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341228 4756 reconciler_common.go:293] "Volume detached for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341245 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341258 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341531 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341712 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.341991 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342009 4756 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342019 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342166 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342178 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342268 4756 reconciler_common.go:293] "Volume detached for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342279 4756 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342289 4756 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342298 4756 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342338 4756 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342350 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342361 4756 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342370 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath 
\"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342381 4756 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342419 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342429 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342439 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342450 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342459 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342489 4756 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc 
kubenswrapper[4756]: I1203 10:53:29.342498 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342506 4756 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342515 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342526 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342535 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342573 4756 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342588 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342597 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342659 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342682 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342702 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342718 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342732 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342749 4756 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342764 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342776 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342788 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342801 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342814 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342827 4756 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342840 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node 
\"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342853 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342870 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342395 4756 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342885 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342900 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342913 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342926 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342939 4756 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342970 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342984 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342997 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343009 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343022 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343035 4756 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343050 4756 reconciler_common.go:293] "Volume detached 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343063 4756 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343075 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343077 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343114 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343125 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343135 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc 
kubenswrapper[4756]: I1203 10:53:29.343144 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343153 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343162 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343171 4756 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343181 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343192 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343200 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343209 4756 reconciler_common.go:293] "Volume detached for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343218 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343229 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343237 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343246 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343257 4756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343265 4756 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343274 4756 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343282 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343290 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343299 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343309 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343318 4756 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343326 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343334 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343343 4756 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343351 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343360 4756 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342018 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342433 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342457 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.342671 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.343456 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:29.84343697 +0000 UTC m=+20.873438214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343470 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343371 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343561 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343573 4756 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343584 4756 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343586 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343596 4756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.342692 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343608 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343626 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343636 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.342848 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.343659 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:29.843639647 +0000 UTC m=+20.873640981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343686 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343698 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343708 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343716 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343726 4756 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343734 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343743 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343751 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343760 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343772 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343781 4756 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343789 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343797 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343805 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343813 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343823 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343832 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343840 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343848 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343856 4756 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343866 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343877 4756 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343888 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343897 4756 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343905 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343913 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343920 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343929 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343937 4756 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343965 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343974 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343982 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343990 4756 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.343998 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 03 
10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344006 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344015 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344023 4756 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344031 4756 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344039 4756 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344048 4756 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344056 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344064 4756 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344072 4756 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344081 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344090 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344100 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344108 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344116 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344125 4756 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344136 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344267 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344515 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.344834 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.345111 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.345154 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.346283 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.346462 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.346705 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.346834 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.347251 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.347401 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.347695 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.348341 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.348447 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.348463 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.346653 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.348701 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.348702 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.348969 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.349225 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.349239 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.349272 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.330680 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.330868 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.349373 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.349914 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.351090 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.351122 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.351843 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.352706 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb"} Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.360513 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.361469 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.363134 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.365722 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.365752 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.365767 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.365847 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:29.865822109 +0000 UTC m=+20.895823453 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.367302 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.367339 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.367354 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.367408 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:29.867387464 +0000 UTC m=+20.897388708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.370284 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.374538 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.375126 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.376217 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.378490 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.385147 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.387636 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.393075 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.399990 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.407584 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.416267 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.424535 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.440465 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445422 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445536 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445654 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445727 4756 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445792 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445869 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446032 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446111 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446179 4756 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446246 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446312 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446378 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446447 4756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446514 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446577 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446641 4756 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446706 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446774 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446839 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446905 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.446986 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447062 4756 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447148 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447248 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445573 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447321 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447393 4756 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447410 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447423 4756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447438 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447451 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447462 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447474 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447484 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447497 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447511 4756 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447523 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc 
kubenswrapper[4756]: I1203 10:53:29.447535 4756 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447546 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.447557 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.445537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.451129 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.461466 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.469559 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.479322 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.484336 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.489735 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.495463 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.500405 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.505383 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.513812 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 10:53:29 crc kubenswrapper[4756]: W1203 10:53:29.514260 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-219bddc58355366c04f5a8d3a232f678f4d67364f24813262b06eca7e573da9e WatchSource:0}: Error finding container 219bddc58355366c04f5a8d3a232f678f4d67364f24813262b06eca7e573da9e: Status 404 returned error can't find the container with id 219bddc58355366c04f5a8d3a232f678f4d67364f24813262b06eca7e573da9e Dec 03 10:53:29 crc kubenswrapper[4756]: W1203 10:53:29.525574 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-15e5a6b091b2556461497378183a60f9fc0b3b4ca219a66c71db8512894222db WatchSource:0}: Error finding container 15e5a6b091b2556461497378183a60f9fc0b3b4ca219a66c71db8512894222db: Status 404 returned error can't find the container with id 15e5a6b091b2556461497378183a60f9fc0b3b4ca219a66c71db8512894222db Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.525840 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.539462 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.556171 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.575924 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.591941 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.615462 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.851379 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.851494 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.851516 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.851584 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.851636 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:30.851620242 +0000 UTC m=+21.881621476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.851686 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:53:30.851680614 +0000 UTC m=+21.881681858 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.851741 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.851761 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:30.851755427 +0000 UTC m=+21.881756671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.950187 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bxgrk"] Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.950590 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.951493 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-pppvw"] Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.951710 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.952590 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.952615 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.952629 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-27cgj"] Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.952743 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.952760 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:29 crc 
kubenswrapper[4756]: E1203 10:53:29.952770 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.952806 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:30.95279454 +0000 UTC m=+21.982795784 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.953053 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.953064 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.953072 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Dec 03 10:53:29 crc kubenswrapper[4756]: E1203 10:53:29.953092 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:30.953085599 +0000 UTC m=+21.983086833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.953258 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.954647 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.955468 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.955641 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.955688 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.955808 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.958164 4756 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.958894 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.959086 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.959316 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.959420 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.959542 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.960108 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.961733 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.966192 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:29 crc kubenswrapper[4756]: I1203 10:53:29.978872 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.014314 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.041785 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053483 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85bcc5e9-f7cc-4293-ba77-2013229e14f2-hosts-file\") pod \"node-resolver-bxgrk\" (UID: \"85bcc5e9-f7cc-4293-ba77-2013229e14f2\") " pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053523 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2hw\" (UniqueName: 
\"kubernetes.io/projected/85bcc5e9-f7cc-4293-ba77-2013229e14f2-kube-api-access-9j2hw\") pod \"node-resolver-bxgrk\" (UID: \"85bcc5e9-f7cc-4293-ba77-2013229e14f2\") " pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053596 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jvwg\" (UniqueName: \"kubernetes.io/projected/088d1c61-980b-42bc-82e6-0215df050158-kube-api-access-5jvwg\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053655 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4cc39f5-d4a1-4174-8d5f-56126872107f-proxy-tls\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053672 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-os-release\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053696 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4cc39f5-d4a1-4174-8d5f-56126872107f-rootfs\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053714 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-cnibin\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053755 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4cc39f5-d4a1-4174-8d5f-56126872107f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053791 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkz9f\" (UniqueName: \"kubernetes.io/projected/f4cc39f5-d4a1-4174-8d5f-56126872107f-kube-api-access-fkz9f\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053807 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-tuning-conf-dir\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053822 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/088d1c61-980b-42bc-82e6-0215df050158-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-27cgj\" (UID: 
\"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053845 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/088d1c61-980b-42bc-82e6-0215df050158-cni-binary-copy\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.053881 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-system-cni-dir\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.077138 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.094468 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.110258 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.114232 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-03 10:48:29 +0000 UTC, rotation deadline is 2026-10-24 18:25:59.875104482 +0000 UTC Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.114293 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7807h32m29.760814517s for next certificate rotation Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.123241 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.135690 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.151488 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.154850 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/088d1c61-980b-42bc-82e6-0215df050158-cni-binary-copy\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.154909 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-system-cni-dir\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.154944 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85bcc5e9-f7cc-4293-ba77-2013229e14f2-hosts-file\") pod \"node-resolver-bxgrk\" (UID: \"85bcc5e9-f7cc-4293-ba77-2013229e14f2\") " pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.154985 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j2hw\" (UniqueName: \"kubernetes.io/projected/85bcc5e9-f7cc-4293-ba77-2013229e14f2-kube-api-access-9j2hw\") pod \"node-resolver-bxgrk\" (UID: \"85bcc5e9-f7cc-4293-ba77-2013229e14f2\") " pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155008 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jvwg\" (UniqueName: \"kubernetes.io/projected/088d1c61-980b-42bc-82e6-0215df050158-kube-api-access-5jvwg\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4cc39f5-d4a1-4174-8d5f-56126872107f-proxy-tls\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155062 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-os-release\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155087 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4cc39f5-d4a1-4174-8d5f-56126872107f-rootfs\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155113 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4cc39f5-d4a1-4174-8d5f-56126872107f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155136 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-cnibin\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155146 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85bcc5e9-f7cc-4293-ba77-2013229e14f2-hosts-file\") pod \"node-resolver-bxgrk\" (UID: \"85bcc5e9-f7cc-4293-ba77-2013229e14f2\") " pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155168 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fkz9f\" (UniqueName: \"kubernetes.io/projected/f4cc39f5-d4a1-4174-8d5f-56126872107f-kube-api-access-fkz9f\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155263 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-tuning-conf-dir\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155287 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4cc39f5-d4a1-4174-8d5f-56126872107f-rootfs\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155292 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/088d1c61-980b-42bc-82e6-0215df050158-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155290 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-os-release\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155036 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-system-cni-dir\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155488 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-cnibin\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.155679 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/088d1c61-980b-42bc-82e6-0215df050158-tuning-conf-dir\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.156187 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/088d1c61-980b-42bc-82e6-0215df050158-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.156188 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4cc39f5-d4a1-4174-8d5f-56126872107f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.156243 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/088d1c61-980b-42bc-82e6-0215df050158-cni-binary-copy\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.160642 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4cc39f5-d4a1-4174-8d5f-56126872107f-proxy-tls\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.169234 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.173526 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkz9f\" (UniqueName: \"kubernetes.io/projected/f4cc39f5-d4a1-4174-8d5f-56126872107f-kube-api-access-fkz9f\") pod \"machine-config-daemon-pppvw\" (UID: \"f4cc39f5-d4a1-4174-8d5f-56126872107f\") " pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.173967 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j2hw\" (UniqueName: \"kubernetes.io/projected/85bcc5e9-f7cc-4293-ba77-2013229e14f2-kube-api-access-9j2hw\") pod \"node-resolver-bxgrk\" (UID: \"85bcc5e9-f7cc-4293-ba77-2013229e14f2\") " pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.174442 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jvwg\" (UniqueName: \"kubernetes.io/projected/088d1c61-980b-42bc-82e6-0215df050158-kube-api-access-5jvwg\") pod \"multus-additional-cni-plugins-27cgj\" (UID: \"088d1c61-980b-42bc-82e6-0215df050158\") " pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.185211 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.232450 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.248528 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.264104 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bxgrk" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.268454 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.271100 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:53:30 crc kubenswrapper[4756]: W1203 10:53:30.274300 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85bcc5e9_f7cc_4293_ba77_2013229e14f2.slice/crio-64e1cb1fb87cbfeeae9323c87a79ad5046e1c24017b20e78ccd9aeb80adf4dff WatchSource:0}: Error finding container 64e1cb1fb87cbfeeae9323c87a79ad5046e1c24017b20e78ccd9aeb80adf4dff: Status 404 returned error can't find the container with id 64e1cb1fb87cbfeeae9323c87a79ad5046e1c24017b20e78ccd9aeb80adf4dff Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.276519 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-27cgj" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.283261 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: W1203 10:53:30.283910 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4cc39f5_d4a1_4174_8d5f_56126872107f.slice/crio-e317ae301897fb1682e3eb889bc956eaf52786465e17d74c9afc351f581e5e4e WatchSource:0}: Error finding container e317ae301897fb1682e3eb889bc956eaf52786465e17d74c9afc351f581e5e4e: Status 404 
returned error can't find the container with id e317ae301897fb1682e3eb889bc956eaf52786465e17d74c9afc351f581e5e4e Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.300787 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.317547 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.335528 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4xwtn"] Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.335891 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.336814 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zqms7"] Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.337517 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.337706 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.337786 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.339270 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.339407 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.340230 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.340468 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.340528 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.340542 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.340697 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.346917 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.357669 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerStarted","Data":"c9a55c2c8e77751f1276902a7a8f0ed857c80730cf73d8c6ee0d72d141c92a65"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.363082 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.363376 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.363448 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"219bddc58355366c04f5a8d3a232f678f4d67364f24813262b06eca7e573da9e"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.365416 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"e317ae301897fb1682e3eb889bc956eaf52786465e17d74c9afc351f581e5e4e"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 
10:53:30.367256 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bxgrk" event={"ID":"85bcc5e9-f7cc-4293-ba77-2013229e14f2","Type":"ContainerStarted","Data":"64e1cb1fb87cbfeeae9323c87a79ad5046e1c24017b20e78ccd9aeb80adf4dff"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.369509 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.369564 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.369580 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b40b57065a3ec3db8ef5a75b8f0425f259f1b71f3b266697d211c6bcfebdd3e7"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.371392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"15e5a6b091b2556461497378183a60f9fc0b3b4ca219a66c71db8512894222db"} Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.377613 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.393176 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.409230 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.435279 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.451154 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457611 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-slash\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457646 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-ovn\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457664 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxd9w\" (UniqueName: \"kubernetes.io/projected/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-kube-api-access-pxd9w\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457680 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-socket-dir-parent\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457695 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-var-lib-openvswitch\") 
pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457708 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-bin\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457720 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-netd\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457734 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-env-overrides\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457748 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-kubelet\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457778 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-etc-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457795 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457812 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-cni-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457836 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-systemd-units\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457868 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-script-lib\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457885 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-etc-kubernetes\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457901 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-cnibin\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457916 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-cni-binary-copy\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457932 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-cni-bin\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457946 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-cni-multus\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.457993 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-netns\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458011 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-config\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458025 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktq2\" (UniqueName: \"kubernetes.io/projected/b16dcb4b-a5dd-4081-a569-7f5a024f673b-kube-api-access-tktq2\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458047 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-ovn-kubernetes\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458076 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-log-socket\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458090 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-hostroot\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458104 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-os-release\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458118 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-kubelet\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458133 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-node-log\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458147 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-system-cni-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458162 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-netns\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458188 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-conf-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458246 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-multus-certs\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458271 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovn-node-metrics-cert\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458318 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-k8s-cni-cncf-io\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458333 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458348 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-daemon-config\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.458363 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-systemd\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.484439 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.495698 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.510859 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.526060 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.539342 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.551207 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.558837 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-netns\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.558885 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-config\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.558909 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-cnibin\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.558918 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-netns\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.558932 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-cni-binary-copy\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559002 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-cni-bin\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559023 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-cni-multus\") pod \"multus-4xwtn\" (UID: 
\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559051 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-ovn-kubernetes\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559067 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-cni-bin\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559088 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktq2\" (UniqueName: \"kubernetes.io/projected/b16dcb4b-a5dd-4081-a569-7f5a024f673b-kube-api-access-tktq2\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-log-socket\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559137 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-hostroot\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc 
kubenswrapper[4756]: I1203 10:53:30.559157 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-os-release\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559165 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-ovn-kubernetes\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559201 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-hostroot\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559206 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-log-socket\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-cni-multus\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559206 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-kubelet\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559181 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-kubelet\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559292 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-node-log\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559314 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-system-cni-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559331 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-netns\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559335 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-node-log\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-conf-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559373 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-multus-certs\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559391 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-system-cni-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559396 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-conf-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559410 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovn-node-metrics-cert\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559374 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-netns\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-multus-certs\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559549 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-k8s-cni-cncf-io\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559550 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-os-release\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559588 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-run-k8s-cni-cncf-io\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559632 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-systemd\") pod 
\"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559652 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559684 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-daemon-config\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559699 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-systemd\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559704 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-slash\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559724 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-slash\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" 
Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559741 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-ovn\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559765 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559764 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxd9w\" (UniqueName: \"kubernetes.io/projected/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-kube-api-access-pxd9w\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559772 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-config\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559792 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-var-lib-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559800 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-cni-binary-copy\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559816 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-socket-dir-parent\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559841 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-bin\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559850 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-var-lib-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559863 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-netd\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559885 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-env-overrides\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-socket-dir-parent\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559912 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-kubelet\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559936 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-etc-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559979 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559945 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-bin\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560008 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-cni-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-netd\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560081 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-systemd-units\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560062 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-cni-dir\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560061 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-systemd-units\") pod \"ovnkube-node-zqms7\" (UID: 
\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560111 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-etc-openvswitch\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560123 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-script-lib\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559986 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-host-var-lib-kubelet\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.559817 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-ovn\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560156 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-etc-kubernetes\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: 
I1203 10:53:30.560090 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560208 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-etc-kubernetes\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560368 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-multus-daemon-config\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560422 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-cnibin\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560442 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-env-overrides\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.560784 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-script-lib\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.564295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovn-node-metrics-cert\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.566876 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.578574 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktq2\" (UniqueName: \"kubernetes.io/projected/b16dcb4b-a5dd-4081-a569-7f5a024f673b-kube-api-access-tktq2\") pod \"ovnkube-node-zqms7\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.581306 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.583264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxd9w\" (UniqueName: 
\"kubernetes.io/projected/d0dad5dd-86f8-4a8a-aed6-dd07123c5058-kube-api-access-pxd9w\") pod \"multus-4xwtn\" (UID: \"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\") " pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.613342 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.638903 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.663708 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.664854 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4xwtn" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.672446 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.689175 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: W1203 10:53:30.692708 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0dad5dd_86f8_4a8a_aed6_dd07123c5058.slice/crio-36021fe87266b8e415f417ffe7ac11cadbd6bb03130833d6a2c85111158ad52e WatchSource:0}: Error finding container 36021fe87266b8e415f417ffe7ac11cadbd6bb03130833d6a2c85111158ad52e: Status 404 returned error can't find the container with id 36021fe87266b8e415f417ffe7ac11cadbd6bb03130833d6a2c85111158ad52e Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.721148 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: W1203 10:53:30.725343 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb16dcb4b_a5dd_4081_a569_7f5a024f673b.slice/crio-b31490848c54bc40b0a198254faf6c7d14461bb2c383edd54cc35e7e9401db79 WatchSource:0}: Error finding container b31490848c54bc40b0a198254faf6c7d14461bb2c383edd54cc35e7e9401db79: Status 404 returned error can't find the container with id b31490848c54bc40b0a198254faf6c7d14461bb2c383edd54cc35e7e9401db79 Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.754374 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.776074 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.791507 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.805786 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.821487 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.838890 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.851551 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.863290 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.863414 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.863439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.863510 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.863563 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 10:53:32.863546699 +0000 UTC m=+23.893547943 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.863767 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.863812 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:53:32.863778877 +0000 UTC m=+23.893780111 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.863852 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:32.863841449 +0000 UTC m=+23.893842803 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.870003 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.964681 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:30 crc kubenswrapper[4756]: I1203 10:53:30.964720 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.964866 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.964908 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 
10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.964875 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.964922 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.964932 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.964945 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.965004 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:32.964983506 +0000 UTC m=+23.994984810 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:30 crc kubenswrapper[4756]: E1203 10:53:30.965026 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:32.965019118 +0000 UTC m=+23.995020362 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.233894 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.233894 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:31 crc kubenswrapper[4756]: E1203 10:53:31.234369 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:31 crc kubenswrapper[4756]: E1203 10:53:31.234482 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.233922 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:31 crc kubenswrapper[4756]: E1203 10:53:31.234606 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.238501 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.239236 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.240648 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.241328 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.243245 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.243836 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.245008 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.245607 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.247282 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.247883 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.249083 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.251010 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.251714 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.252499 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.253460 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.254089 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.255087 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.255786 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.256528 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.257216 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.260381 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.261282 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.261831 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.263200 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.263730 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.265128 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.266000 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.266479 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.267100 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.268006 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.268544 4756 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.268641 4756 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.270722 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.271392 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.271918 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.274919 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.275805 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.276911 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.278259 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.279456 4756 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.280083 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.281029 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.282111 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.283218 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.284169 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.285149 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.285702 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.286788 4756 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.287368 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.288555 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.289087 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.289600 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.290722 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.291215 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.376908 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054"} Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.376997 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f"} Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.378473 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bxgrk" event={"ID":"85bcc5e9-f7cc-4293-ba77-2013229e14f2","Type":"ContainerStarted","Data":"16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d"} Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.380115 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerStarted","Data":"c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c"} Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.380328 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerStarted","Data":"36021fe87266b8e415f417ffe7ac11cadbd6bb03130833d6a2c85111158ad52e"} Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.381586 4756 generic.go:334] "Generic (PLEG): container finished" podID="088d1c61-980b-42bc-82e6-0215df050158" containerID="8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd" exitCode=0 Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.381654 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerDied","Data":"8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd"} Dec 
03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.383117 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509" exitCode=0 Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.383228 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.383274 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"b31490848c54bc40b0a198254faf6c7d14461bb2c383edd54cc35e7e9401db79"} Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.396754 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.414422 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.429269 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.444030 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.462974 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.482988 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.500493 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.518763 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.529763 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.550282 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.567056 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.581628 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.595117 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.610171 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.625237 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.638036 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.658548 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.674021 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.685421 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.698469 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.709197 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.752487 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.789249 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.827247 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.868821 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:31 crc kubenswrapper[4756]: I1203 10:53:31.908538 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:31Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.098445 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.100304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.100357 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.100372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.100502 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.106894 4756 kubelet_node_status.go:115] "Node was 
previously registered" node="crc" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.107146 4756 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.108004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.108038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.108049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.108066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.108077 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.131028 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.134075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.134111 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.134124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.134140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.134149 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.146466 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.149374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.149410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.149423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.149441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.149457 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.160346 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.163066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.163096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.163104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.163119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.163127 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.175702 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.178940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.178993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.179006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.179022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.179032 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.189532 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.189667 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.190875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.190910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.190921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.190936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.190945 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.293347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.293389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.293399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.293416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.293427 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.388449 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.390084 4756 generic.go:334] "Generic (PLEG): container finished" podID="088d1c61-980b-42bc-82e6-0215df050158" containerID="d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479" exitCode=0 Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.390179 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerDied","Data":"d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.393707 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.393744 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.393758 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.393770 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.393781 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.393794 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.395861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.395890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.395900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.395920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.395932 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.403937 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.414944 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.464839 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.476916 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.490082 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.505233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.505270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.505281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.505299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.505309 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.505673 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.550731 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.564790 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.577598 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.593846 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.604416 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2qbq7"] Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.604860 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.606015 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.607197 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.607817 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.608650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.608682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.608692 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.608706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.608717 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.609763 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.611524 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.624076 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.636418 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.647656 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957
322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.661176 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.673368 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.681594 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/05365c24-b0af-4a09-b576-8245a5ea7512-serviceca\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.682078 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctrh\" (UniqueName: \"kubernetes.io/projected/05365c24-b0af-4a09-b576-8245a5ea7512-kube-api-access-7ctrh\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.682123 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05365c24-b0af-4a09-b576-8245a5ea7512-host\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.711502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.711537 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.711546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.711563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.711573 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.717318 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.749010 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.782573 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05365c24-b0af-4a09-b576-8245a5ea7512-host\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.782620 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05365c24-b0af-4a09-b576-8245a5ea7512-serviceca\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.782649 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctrh\" (UniqueName: \"kubernetes.io/projected/05365c24-b0af-4a09-b576-8245a5ea7512-kube-api-access-7ctrh\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.782721 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/05365c24-b0af-4a09-b576-8245a5ea7512-host\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.783832 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05365c24-b0af-4a09-b576-8245a5ea7512-serviceca\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.786479 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.813741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.813772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.813780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.813794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.813803 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.816436 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctrh\" (UniqueName: \"kubernetes.io/projected/05365c24-b0af-4a09-b576-8245a5ea7512-kube-api-access-7ctrh\") pod \"node-ca-2qbq7\" (UID: \"05365c24-b0af-4a09-b576-8245a5ea7512\") " pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.848892 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.883536 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.883662 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.883693 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.883740 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:53:36.883711396 +0000 UTC m=+27.913712640 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.883793 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.883871 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:36.88384901 +0000 UTC m=+27.913850324 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.883900 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.884027 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:36.884000966 +0000 UTC m=+27.914002230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.888403 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.916522 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.916563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.916573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.916592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.916613 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:32Z","lastTransitionTime":"2025-12-03T10:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.917015 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2qbq7" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.926121 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.967662 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.984802 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:32 crc kubenswrapper[4756]: I1203 10:53:32.984838 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.984977 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.984997 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.985010 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.985037 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.985068 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.985082 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.985069 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:36.985050389 +0000 UTC m=+28.015051623 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:32 crc kubenswrapper[4756]: E1203 10:53:32.985154 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:36.985134572 +0000 UTC m=+28.015135816 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.011767 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.018587 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.018622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.018631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.018650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.018659 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.048274 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.088914 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.121046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.121088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.121098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 
10:53:33.121114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.121124 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.131415 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.223804 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.223848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.223860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.223875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.223884 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.233474 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.233536 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.233474 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:33 crc kubenswrapper[4756]: E1203 10:53:33.233598 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:33 crc kubenswrapper[4756]: E1203 10:53:33.233651 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:33 crc kubenswrapper[4756]: E1203 10:53:33.233705 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.326297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.326339 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.326348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.326372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.326383 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.398294 4756 generic.go:334] "Generic (PLEG): container finished" podID="088d1c61-980b-42bc-82e6-0215df050158" containerID="461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55" exitCode=0 Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.398368 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerDied","Data":"461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.400704 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2qbq7" event={"ID":"05365c24-b0af-4a09-b576-8245a5ea7512","Type":"ContainerStarted","Data":"86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.400741 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2qbq7" event={"ID":"05365c24-b0af-4a09-b576-8245a5ea7512","Type":"ContainerStarted","Data":"4fc240b89f4ccc4fd7268f4877bb5ae7279e465a9344ee9218bc85fe4d388dbd"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.418387 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.429214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.429265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 
crc kubenswrapper[4756]: I1203 10:53:33.429277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.429297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.429309 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.432685 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.444446 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.462380 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 
10:53:33.476897 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 
10:53:33.499173 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.513591 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.523591 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.535575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.535612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.535623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.535641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.535652 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.537226 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.547816 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.568647 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.613187 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.639043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.639420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.639428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.639445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.639456 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.648824 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.690783 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.727572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.741975 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.742014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.742025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc 
kubenswrapper[4756]: I1203 10:53:33.742047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.742060 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.771690 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e
20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.810300 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.844554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.844582 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.844590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.844605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.844615 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.855694 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.888630 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.926966 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.949497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.949563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.949575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.949650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.949724 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:33Z","lastTransitionTime":"2025-12-03T10:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:33 crc kubenswrapper[4756]: I1203 10:53:33.970789 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.014582 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.051835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.051879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.051889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.051908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.051919 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.075648 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.127507 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.143850 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.154399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.154439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.154450 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.154469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.154482 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.171361 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.208030 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.249001 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.256796 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.256834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.256843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.256858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.256869 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.359139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.359175 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.359185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.359199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.359207 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.407218 4756 generic.go:334] "Generic (PLEG): container finished" podID="088d1c61-980b-42bc-82e6-0215df050158" containerID="6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd" exitCode=0 Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.407332 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerDied","Data":"6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.413329 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.427528 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.448090 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.463124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.463208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.463234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.463271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.463299 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.466680 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.481126 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.496919 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.512206 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.529781 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.567118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.567165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.567176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.567197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.567209 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.572986 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z 
is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.616413 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.649560 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.670077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.670119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.670132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 
10:53:34.670152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.670164 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.692641 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.727851 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.771000 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.772625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.772651 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.772662 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.772681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.772691 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.809727 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d209
1be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:34Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.874771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.874815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.874826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 
10:53:34.874842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.874853 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.977022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.977071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.977086 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.977105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:34 crc kubenswrapper[4756]: I1203 10:53:34.977119 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:34Z","lastTransitionTime":"2025-12-03T10:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.080315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.080395 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.080427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.080465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.080502 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.184145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.184191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.184208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.184231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.184248 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.233211 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.233257 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:35 crc kubenswrapper[4756]: E1203 10:53:35.233812 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.233269 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:35 crc kubenswrapper[4756]: E1203 10:53:35.233892 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:35 crc kubenswrapper[4756]: E1203 10:53:35.233946 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.286922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.286978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.286993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.287011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.287024 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.389598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.389642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.389650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.389665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.389675 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.419677 4756 generic.go:334] "Generic (PLEG): container finished" podID="088d1c61-980b-42bc-82e6-0215df050158" containerID="0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf" exitCode=0 Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.419725 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerDied","Data":"0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.437862 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.454703 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.473107 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.486788 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.492660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.492693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.492703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.492719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.492730 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.501842 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.519382 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.538345 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.562117 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.578232 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.592074 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.595045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.595092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.595104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.595124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.595138 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.607842 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.619629 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.630896 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.646109 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:35Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.697738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.697792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.697803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.697823 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.697835 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.801061 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.801106 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.801118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.801137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.801151 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.905032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.905075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.905085 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.905101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:35 crc kubenswrapper[4756]: I1203 10:53:35.905112 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:35Z","lastTransitionTime":"2025-12-03T10:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.008682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.008776 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.008795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.008823 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.008843 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.111132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.111164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.111175 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.111192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.111203 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.213482 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.213525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.213537 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.213555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.213567 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.316890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.317000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.317019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.317045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.317062 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.419764 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.419817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.419830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.419850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.419861 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.431382 4756 generic.go:334] "Generic (PLEG): container finished" podID="088d1c61-980b-42bc-82e6-0215df050158" containerID="558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78" exitCode=0 Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.431456 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerDied","Data":"558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.451756 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.473342 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.484993 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.499491 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.511722 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.522981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.523015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.523025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.523041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.523052 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.523710 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.539266 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.555419 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.576768 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.592491 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.604256 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.616289 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.626128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.626172 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.626183 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.626202 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.626216 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.626792 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d209
1be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.635572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:36Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.728243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.728276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.728287 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.728303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.728314 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.830559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.830835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.830843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.830859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.830872 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.926350 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:36 crc kubenswrapper[4756]: E1203 10:53:36.926510 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 10:53:44.926482297 +0000 UTC m=+35.956483541 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.926481 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.926556 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:36 crc kubenswrapper[4756]: E1203 10:53:36.926645 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:36 crc kubenswrapper[4756]: E1203 10:53:36.926650 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:36 crc kubenswrapper[4756]: E1203 10:53:36.926694 4756 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:44.926683823 +0000 UTC m=+35.956685077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:36 crc kubenswrapper[4756]: E1203 10:53:36.926710 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:44.926702304 +0000 UTC m=+35.956703558 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.933165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.933207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.933221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.933239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:36 crc kubenswrapper[4756]: I1203 10:53:36.933250 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:36Z","lastTransitionTime":"2025-12-03T10:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.027405 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.027447 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027591 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027607 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027618 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027656 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 
10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027703 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027717 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027673 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:45.027657919 +0000 UTC m=+36.057659163 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.027807 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:45.027784193 +0000 UTC m=+36.057785527 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.035452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.035485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.035496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.035514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.035529 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.174331 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.174393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.174411 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.174436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.174453 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.233430 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.233471 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.233543 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.233682 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.233986 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:37 crc kubenswrapper[4756]: E1203 10:53:37.234208 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.277712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.277763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.277772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.277790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.277800 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.379699 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.379737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.379746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.379760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.379770 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.437397 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" event={"ID":"088d1c61-980b-42bc-82e6-0215df050158","Type":"ContainerStarted","Data":"de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.442075 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.442362 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.442385 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.451658 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.464696 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.466941 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.467081 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.475385 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.482027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.482081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.482099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.482122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.482142 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.486519 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.499283 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.512540 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.525876 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.538228 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.555796 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.568586 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.579276 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.584774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.584814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.584824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.584840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.584854 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.590342 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.600442 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.611168 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.626080 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.639862 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.657830 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.672420 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.685895 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.687896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.687927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.687939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.687974 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.687989 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.691141 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.703787 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.718354 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.733625 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.748154 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.762167 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.775176 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.790154 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.790208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.790220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc 
kubenswrapper[4756]: I1203 10:53:37.790240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.790254 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.790267 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55
8f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.803879 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.822292 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:37Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.893260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.893304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.893314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.893332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.893346 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.995943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.996013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.996073 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.996091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:37 crc kubenswrapper[4756]: I1203 10:53:37.996101 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:37Z","lastTransitionTime":"2025-12-03T10:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.098536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.098572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.098580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.098594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.098604 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.201044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.201078 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.201086 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.201101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.201111 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.304223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.304296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.304315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.304338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.304351 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.406679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.406740 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.406765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.406788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.406804 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.509712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.509759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.509771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.509791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.509805 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.612683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.612727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.612736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.612752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.612761 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.716091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.716137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.716149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.716164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.716175 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.818927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.818997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.819009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.819027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.819039 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.921245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.921662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.921722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.921810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:38 crc kubenswrapper[4756]: I1203 10:53:38.921888 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:38Z","lastTransitionTime":"2025-12-03T10:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.024617 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.024655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.024664 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.024677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.024686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.100284 4756 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.127686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.127716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.127728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.127742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.127752 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.229796 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.229895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.229921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.230023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.230064 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.233053 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.233122 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:39 crc kubenswrapper[4756]: E1203 10:53:39.233203 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.233301 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:39 crc kubenswrapper[4756]: E1203 10:53:39.233437 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:39 crc kubenswrapper[4756]: E1203 10:53:39.233738 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.254673 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.270789 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.282876 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.298431 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.311641 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.331429 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.332616 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.332735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.332806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.332877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.332943 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.343051 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.365471 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.378248 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.391512 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.402338 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.414156 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.429505 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.434921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.434992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.435006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.435028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.435042 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.442146 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.450284 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/0.log" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.452850 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a" exitCode=1 Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.452898 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.453540 4756 scope.go:117] "RemoveContainer" containerID="2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.466886 4756 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a8
5370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.478349 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.486924 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.499310 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:53:38.883502 6033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 10:53:38.883534 6033 handler.go:190] Sending *v1.EgressFirewall 
event handler 9 for removal\\\\nI1203 10:53:38.883578 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 10:53:38.883586 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 10:53:38.883594 6033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 10:53:38.883631 6033 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 10:53:38.883637 6033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 10:53:38.883646 6033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 10:53:38.883649 6033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:53:38.883661 6033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:53:38.883656 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 10:53:38.883670 6033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 10:53:38.883676 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 10:53:38.883732 6033 factory.go:656] Stopping watch factory\\\\nI1203 10:53:38.883752 6033 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959
fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.519495 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.533675 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.537989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.538037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.538049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.538068 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.538083 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.547342 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.560186 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.571118 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.588015 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.603782 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.620128 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.634330 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.640670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.640710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.640719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.640736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.640746 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.648812 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.661562 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.674723 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.689264 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.703307 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.722027 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.741242 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.747531 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.747596 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.747617 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.747647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.747665 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.773826 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:53:38.883502 6033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 10:53:38.883534 6033 handler.go:190] Sending *v1.EgressFirewall 
event handler 9 for removal\\\\nI1203 10:53:38.883578 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 10:53:38.883586 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 10:53:38.883594 6033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 10:53:38.883631 6033 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 10:53:38.883637 6033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 10:53:38.883646 6033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 10:53:38.883649 6033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:53:38.883661 6033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:53:38.883656 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 10:53:38.883670 6033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 10:53:38.883676 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 10:53:38.883732 6033 factory.go:656] Stopping watch factory\\\\nI1203 10:53:38.883752 6033 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959
fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.792609 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.809548 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.827879 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.843677 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.850236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.850288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.850307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.850330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.850344 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.858163 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.875272 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.890810 4756 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.908169 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.953569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.953608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.953625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.953653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:39 crc kubenswrapper[4756]: I1203 10:53:39.953671 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:39Z","lastTransitionTime":"2025-12-03T10:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.056748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.056803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.056813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.056833 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.056850 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.160145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.160248 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.160261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.160282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.160297 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.264609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.264666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.264680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.264701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.264713 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.367468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.367528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.367544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.367573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.367592 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.461322 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/0.log" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.466019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.466730 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.470601 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.470675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.470701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.470732 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.470759 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.488783 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.513141 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 
10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.535259 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.556048 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.569371 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.573931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.573995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.574006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.574029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.574045 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.586362 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23
511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.604785 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.620178 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.634480 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.652158 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.667420 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.678382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.678466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.678478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc 
kubenswrapper[4756]: I1203 10:53:40.678502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.678514 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.687020 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55
8f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.704173 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.730039 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:53:38.883502 6033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 10:53:38.883534 6033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 10:53:38.883578 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 
10:53:38.883586 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 10:53:38.883594 6033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 10:53:38.883631 6033 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 10:53:38.883637 6033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 10:53:38.883646 6033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 10:53:38.883649 6033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:53:38.883661 6033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:53:38.883656 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 10:53:38.883670 6033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 10:53:38.883676 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 10:53:38.883732 6033 factory.go:656] Stopping watch factory\\\\nI1203 10:53:38.883752 6033 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:40Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.782324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.782376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.782390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.782411 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.782425 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.886221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.886301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.886322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.886349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.886369 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.989464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.989754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.989945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.990163 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:40 crc kubenswrapper[4756]: I1203 10:53:40.990348 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:40Z","lastTransitionTime":"2025-12-03T10:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.092731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.092998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.093092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.093348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.093406 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.196807 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.197510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.197551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.197572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.197583 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.233428 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:41 crc kubenswrapper[4756]: E1203 10:53:41.233573 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.234009 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:41 crc kubenswrapper[4756]: E1203 10:53:41.234081 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.234159 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:41 crc kubenswrapper[4756]: E1203 10:53:41.234239 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.300885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.300985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.301000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.301021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.301035 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.404667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.404715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.404725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.404746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.404757 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.470982 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/1.log" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.471780 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/0.log" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.475035 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396" exitCode=1 Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.475086 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.475139 4756 scope.go:117] "RemoveContainer" containerID="2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.475827 4756 scope.go:117] "RemoveContainer" containerID="1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396" Dec 03 10:53:41 crc kubenswrapper[4756]: E1203 10:53:41.476017 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.491465 4756 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.502940 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.507308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.507361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.507379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.507403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.507419 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.515178 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.527545 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.543069 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.554685 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.576792 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.588844 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.609639 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e43b0a66d58e44ea85211ded5f22442e6d66edaaa1d08f0450ca3aed7a4b18a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:53:38.883502 6033 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 10:53:38.883534 6033 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 10:53:38.883578 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 
10:53:38.883586 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 10:53:38.883594 6033 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 10:53:38.883631 6033 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 10:53:38.883637 6033 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 10:53:38.883646 6033 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 10:53:38.883649 6033 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:53:38.883661 6033 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:53:38.883656 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 10:53:38.883670 6033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 10:53:38.883676 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 10:53:38.883732 6033 factory.go:656] Stopping watch factory\\\\nI1203 10:53:38.883752 6033 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"
name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 
10:53:41.610031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.610059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.610076 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.610097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.610114 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.619049 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.627750 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.640107 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.652778 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.668815 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:41Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.712969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.713244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.713336 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.713434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.713515 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.816440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.816496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.816508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.816530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.816547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.919593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.919648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.919661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.919683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:41 crc kubenswrapper[4756]: I1203 10:53:41.919698 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:41Z","lastTransitionTime":"2025-12-03T10:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.023581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.023647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.023659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.023679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.023690 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.127923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.128029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.128041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.128062 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.128075 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.218325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.218382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.218397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.218422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.218435 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: E1203 10:53:42.245392 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.250929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.251034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.251051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.251073 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.251089 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: E1203 10:53:42.274475 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.280385 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.280453 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.280470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.280495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.280513 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: E1203 10:53:42.296705 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.301563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.301611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.301625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.301647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.301664 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: E1203 10:53:42.320367 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.324917 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.324986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.324997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.325015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.325028 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: E1203 10:53:42.337980 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: E1203 10:53:42.338096 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.339740 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.339853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.339873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.339919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.339941 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.442376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.442411 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.442419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.442433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.442444 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.479879 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/1.log" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.483026 4756 scope.go:117] "RemoveContainer" containerID="1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396" Dec 03 10:53:42 crc kubenswrapper[4756]: E1203 10:53:42.483255 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.503217 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.515037 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.530395 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.543253 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.544862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.544901 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.544915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.544931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.544943 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.555148 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.564529 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.575742 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.588194 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.599744 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.611591 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.622386 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.637198 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.647354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.647395 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.647406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.647423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.647436 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.652377 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.663819 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.691269 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt"] Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.691770 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.693824 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.695664 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.703749 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f
1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.715429 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.727113 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.737259 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.746028 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.749551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.749584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.749593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.749611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.749621 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.758374 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.771758 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.784313 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.798840 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.811211 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.824564 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.824724 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.824794 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 
10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.824820 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.824862 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmq9f\" (UniqueName: \"kubernetes.io/projected/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-kube-api-access-rmq9f\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.834897 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.847538 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.853503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.853552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.853571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.853592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.853601 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.859681 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z 
is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.875822 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:42Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.926116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.926180 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.926222 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.926819 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.926258 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmq9f\" (UniqueName: \"kubernetes.io/projected/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-kube-api-access-rmq9f\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.927040 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.931736 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.941968 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmq9f\" (UniqueName: \"kubernetes.io/projected/32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c-kube-api-access-rmq9f\") pod \"ovnkube-control-plane-749d76644c-xngpt\" (UID: \"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.956561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.956628 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.956645 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.956667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:42 crc kubenswrapper[4756]: I1203 10:53:42.956682 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:42Z","lastTransitionTime":"2025-12-03T10:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.003447 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" Dec 03 10:53:43 crc kubenswrapper[4756]: W1203 10:53:43.015113 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ebbbed_12e9_4c2f_9c8a_4e2693a7e65c.slice/crio-ff785a0006cda6812c324a4b1766602a3311d715171b827bfbfca7d68b4b2bec WatchSource:0}: Error finding container ff785a0006cda6812c324a4b1766602a3311d715171b827bfbfca7d68b4b2bec: Status 404 returned error can't find the container with id ff785a0006cda6812c324a4b1766602a3311d715171b827bfbfca7d68b4b2bec Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.059656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.059683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.059709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.059723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.059731 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.162065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.162105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.162112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.162128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.162139 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.233626 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.233726 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:43 crc kubenswrapper[4756]: E1203 10:53:43.233754 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.233802 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:43 crc kubenswrapper[4756]: E1203 10:53:43.233876 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:43 crc kubenswrapper[4756]: E1203 10:53:43.234039 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.264509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.264554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.264571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.264593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.264605 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.366608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.366693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.366716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.366747 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.366771 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.469697 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.469739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.469752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.469771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.469783 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.493331 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" event={"ID":"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c","Type":"ContainerStarted","Data":"ff785a0006cda6812c324a4b1766602a3311d715171b827bfbfca7d68b4b2bec"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.573173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.573238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.573263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.573294 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.573318 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.676214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.676255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.676266 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.676281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.676292 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.778470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.778517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.778533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.778551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.778565 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.822173 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qvt7n"] Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.822683 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:43 crc kubenswrapper[4756]: E1203 10:53:43.822752 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.832851 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.845737 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.858735 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.881468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.881338 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.881506 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.881714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.881747 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.881785 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.894869 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc 
kubenswrapper[4756]: I1203 10:53:43.907087 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50
a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.918086 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.931650 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.940015 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.940069 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k64p4\" (UniqueName: \"kubernetes.io/projected/cd88c3db-a819-4fb9-a952-30dc1b67c375-kube-api-access-k64p4\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.941916 4756 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.950829 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.962015 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.978347 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.984534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.984579 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.984588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.984606 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.984616 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:43Z","lastTransitionTime":"2025-12-03T10:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:43 crc kubenswrapper[4756]: I1203 10:53:43.994230 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:43Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.009569 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.026262 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.037557 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.040973 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.041006 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k64p4\" (UniqueName: \"kubernetes.io/projected/cd88c3db-a819-4fb9-a952-30dc1b67c375-kube-api-access-k64p4\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.041324 4756 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.041475 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:44.541442878 +0000 UTC m=+35.571444282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.057526 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k64p4\" (UniqueName: \"kubernetes.io/projected/cd88c3db-a819-4fb9-a952-30dc1b67c375-kube-api-access-k64p4\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.086786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.086812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.086820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.086831 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.086840 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.189307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.189348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.189360 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.189378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.189389 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.296673 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.296731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.296743 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.296762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.296775 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.399744 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.399795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.399809 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.399826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.399840 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.497935 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" event={"ID":"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c","Type":"ContainerStarted","Data":"cee7dd58da7f3bdf90f073872b12dac06efc7fecbad98002942bcb104dc6041a"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.497996 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" event={"ID":"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c","Type":"ContainerStarted","Data":"187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.502219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.502252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.502260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.502272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.502283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.510773 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.520672 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.530372 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.544637 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.545946 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.546080 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.546150 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:45.546133767 +0000 UTC m=+36.576135011 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.555579 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.571277 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.585707 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.596749 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.604902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.605109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.605215 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.605369 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.605459 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.612882 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.625037 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.636879 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.656487 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.669762 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.681191 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc 
kubenswrapper[4756]: I1203 10:53:44.700406 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39
b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.707124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.707159 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.707169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.707183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.707192 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.713343 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:44Z 
is after 2025-08-24T17:21:41Z" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.809423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.809720 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.809796 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.809882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.809962 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.912628 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.912688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.912699 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.912714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.912725 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:44Z","lastTransitionTime":"2025-12-03T10:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.950094 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.950182 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:44 crc kubenswrapper[4756]: I1203 10:53:44.950208 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.950323 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.950391 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:54:00.950352794 +0000 UTC m=+51.980354078 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.950441 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:00.950425266 +0000 UTC m=+51.980426540 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.950566 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:44 crc kubenswrapper[4756]: E1203 10:53:44.950636 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:00.950623253 +0000 UTC m=+51.980624527 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.016424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.016469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.016486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.016509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.016526 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.051197 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.051263 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.051439 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.051474 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.051486 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.051525 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:53:45 crc 
kubenswrapper[4756]: E1203 10:53:45.051600 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.051628 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.051600 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:01.051529187 +0000 UTC m=+52.081530431 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.051733 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:01.051705103 +0000 UTC m=+52.081706397 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.119895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.120000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.120024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.120051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.120072 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.223229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.223308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.223332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.223362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.223389 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.233432 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.233488 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.233488 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.233621 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.233733 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.233770 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.233864 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.233975 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.326183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.326224 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.326236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.326254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.326266 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.428351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.428390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.428399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.428412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.428421 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.530404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.530672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.530746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.530811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.530881 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.557028 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.557155 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:45 crc kubenswrapper[4756]: E1203 10:53:45.557200 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:47.557186467 +0000 UTC m=+38.587187701 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.633734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.633783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.633797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.633816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.633829 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.736755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.736810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.736827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.736846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.736861 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.839812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.839854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.839863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.839879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.839892 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.942435 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.942499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.942518 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.942549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:45 crc kubenswrapper[4756]: I1203 10:53:45.942568 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:45Z","lastTransitionTime":"2025-12-03T10:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.044907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.044979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.044994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.045017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.045028 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.147646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.147680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.147688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.147701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.147710 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.249497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.249548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.249559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.249575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.249589 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.352719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.352762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.352772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.352788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.352801 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.455416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.455460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.455472 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.455488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.455499 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.557534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.557588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.557605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.557627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.557643 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.660319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.661405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.661567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.661707 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.661828 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.764414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.764712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.764793 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.764882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.764979 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.868167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.868201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.868212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.868229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.868241 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.970905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.971392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.971568 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.971752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:46 crc kubenswrapper[4756]: I1203 10:53:46.972093 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:46Z","lastTransitionTime":"2025-12-03T10:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.074948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.075006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.075015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.075030 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.075039 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.177133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.177167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.177179 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.177195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.177207 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.233748 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:47 crc kubenswrapper[4756]: E1203 10:53:47.234202 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.233812 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:47 crc kubenswrapper[4756]: E1203 10:53:47.234411 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.233748 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:47 crc kubenswrapper[4756]: E1203 10:53:47.234682 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.233863 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:47 crc kubenswrapper[4756]: E1203 10:53:47.234913 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.279594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.279658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.279677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.279698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.279714 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.382028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.382064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.382075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.382091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.382100 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.484701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.484758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.484772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.484795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.484809 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.575312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:47 crc kubenswrapper[4756]: E1203 10:53:47.575466 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:47 crc kubenswrapper[4756]: E1203 10:53:47.575542 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:51.575522279 +0000 UTC m=+42.605523533 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.587824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.587871 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.587894 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.587918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.587939 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.690783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.690851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.690864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.690889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.690908 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.794310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.794376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.794392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.794416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.794430 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.897497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.897537 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.897551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.897569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:47 crc kubenswrapper[4756]: I1203 10:53:47.897580 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:47Z","lastTransitionTime":"2025-12-03T10:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.001910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.001999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.002023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.002050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.002073 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.106174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.106234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.106253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.106281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.106302 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.209620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.209686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.209703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.209731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.209749 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.312785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.312842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.312850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.312868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.312877 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.417448 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.417541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.417574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.417612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.417633 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.520503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.520575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.520598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.520624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.520644 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.623755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.623842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.623865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.623899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.623928 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.728017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.728103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.728125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.728156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.728176 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.830678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.830763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.830782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.830810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.830832 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.933098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.933164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.933187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.933220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:48 crc kubenswrapper[4756]: I1203 10:53:48.933245 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:48Z","lastTransitionTime":"2025-12-03T10:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.035285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.035329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.035341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.035358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.035372 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.138241 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.138288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.138299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.138315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.138325 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.233483 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.233526 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.233563 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.233526 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:49 crc kubenswrapper[4756]: E1203 10:53:49.233659 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:49 crc kubenswrapper[4756]: E1203 10:53:49.233908 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:49 crc kubenswrapper[4756]: E1203 10:53:49.234014 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:49 crc kubenswrapper[4756]: E1203 10:53:49.234074 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.240413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.240467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.240485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.240505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.240519 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.245870 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.258517 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.267727 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.279435 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.294285 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.312240 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.327371 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.339121 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.343554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.343619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.343641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.343669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.343689 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.354052 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.367217 4756 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.379667 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.397125 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.414346 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.424235 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.435035 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc 
kubenswrapper[4756]: I1203 10:53:49.446577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.446637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.446660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.446690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.446712 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.448406 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.548828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.548872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.548888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.548910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.548927 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.650651 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.650700 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.650717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.650739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.650756 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.754162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.755003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.755018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.755032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.755040 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.858445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.858510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.858534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.858563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.858587 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.960872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.960925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.960942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.960991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:49 crc kubenswrapper[4756]: I1203 10:53:49.961010 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:49Z","lastTransitionTime":"2025-12-03T10:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.063875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.064010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.064029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.064055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.064072 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.167843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.168023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.168051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.168080 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.168097 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.270260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.270331 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.270345 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.270405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.270419 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.373560 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.373602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.373611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.373629 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.373640 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.477622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.477686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.477704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.477727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.477746 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.581864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.581933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.581979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.582009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.582035 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.684196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.684243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.684255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.684276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.684293 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.787858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.787913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.787931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.787995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.788017 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.892167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.892240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.892252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.892270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.892283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.995940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.996010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.996020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.996043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:50 crc kubenswrapper[4756]: I1203 10:53:50.996056 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:50Z","lastTransitionTime":"2025-12-03T10:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.099128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.099180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.099194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.099218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.099234 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.203349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.203401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.203419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.203446 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.203464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.233421 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.233538 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.233453 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.233559 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:51 crc kubenswrapper[4756]: E1203 10:53:51.233914 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:51 crc kubenswrapper[4756]: E1203 10:53:51.233985 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:51 crc kubenswrapper[4756]: E1203 10:53:51.234071 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:51 crc kubenswrapper[4756]: E1203 10:53:51.234197 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.306295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.306362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.306380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.306410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.306429 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.409944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.410043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.410066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.410094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.410146 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.513902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.514051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.514074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.514108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.514128 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.624102 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:51 crc kubenswrapper[4756]: E1203 10:53:51.624311 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:51 crc kubenswrapper[4756]: E1203 10:53:51.624438 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:53:59.624393921 +0000 UTC m=+50.654395205 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.625581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.625616 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.625628 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.625647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.625660 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.728350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.728402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.728433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.728457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.728477 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.830603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.830670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.830690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.830715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.830733 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.934102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.934155 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.934168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.934188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:51 crc kubenswrapper[4756]: I1203 10:53:51.934200 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:51Z","lastTransitionTime":"2025-12-03T10:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.037657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.037727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.037744 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.037770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.037786 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.140844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.140912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.140933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.140985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.141004 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.242915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.242980 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.242993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.243010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.243022 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.345673 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.345711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.345724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.345741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.345754 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.448665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.448729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.448746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.448770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.448787 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.551882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.551922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.551932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.551944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.551972 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.654326 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.654398 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.654413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.654432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.654446 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.707670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.707725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.707739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.707757 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.707769 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: E1203 10:53:52.721708 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:52Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.726404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.726439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.726449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.726464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.726472 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: E1203 10:53:52.740464 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:52Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.743905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.743970 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.743980 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.743994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.744004 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: E1203 10:53:52.755892 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:52Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.759588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.759619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.759627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.759643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.759652 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: E1203 10:53:52.773460 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:52Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.777710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.777755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.777767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.777783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.777794 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: E1203 10:53:52.790384 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:52Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:52 crc kubenswrapper[4756]: E1203 10:53:52.790530 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.792437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.792476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.792485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.792504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.792516 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.894813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.894861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.894873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.894893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.894907 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.997734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.997806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.997822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.997841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:52 crc kubenswrapper[4756]: I1203 10:53:52.997856 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:52Z","lastTransitionTime":"2025-12-03T10:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.100672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.100714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.100726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.100750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.100765 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.203629 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.203745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.203770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.203791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.203802 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.233256 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.233364 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:53 crc kubenswrapper[4756]: E1203 10:53:53.233407 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.233442 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.233459 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:53 crc kubenswrapper[4756]: E1203 10:53:53.233586 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:53 crc kubenswrapper[4756]: E1203 10:53:53.233672 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:53 crc kubenswrapper[4756]: E1203 10:53:53.233769 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.306150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.306213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.306260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.306291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.306309 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.409197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.409246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.409255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.409273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.409283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.513308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.513376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.513390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.513411 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.513424 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.617273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.617332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.617343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.617361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.617372 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.721277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.721406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.721430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.721464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.721486 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.824670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.824750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.824769 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.824799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.824818 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.928942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.929028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.929046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.929076 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:53 crc kubenswrapper[4756]: I1203 10:53:53.929099 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:53Z","lastTransitionTime":"2025-12-03T10:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.032388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.032480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.032506 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.032540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.032566 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.136238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.136314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.136332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.136362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.136381 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.240104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.240169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.240185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.240211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.240232 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.343467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.343557 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.343574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.343594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.343613 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.445934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.446008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.446023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.446087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.446102 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.548301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.548381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.548401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.548432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.548453 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.650710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.650748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.650758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.650773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.650784 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.754396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.754510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.754542 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.754584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.754611 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.858038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.858099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.858114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.858140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.858153 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.964077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.964131 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.964141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.964161 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:54 crc kubenswrapper[4756]: I1203 10:53:54.964175 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:54Z","lastTransitionTime":"2025-12-03T10:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.068779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.068861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.068881 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.068939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.069010 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.172403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.172454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.172467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.172489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.172505 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.233521 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.233670 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.233729 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:55 crc kubenswrapper[4756]: E1203 10:53:55.233818 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.234002 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:55 crc kubenswrapper[4756]: E1203 10:53:55.234022 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:55 crc kubenswrapper[4756]: E1203 10:53:55.234124 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:55 crc kubenswrapper[4756]: E1203 10:53:55.234268 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.274973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.275018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.275033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.275051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.275065 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.378116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.378413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.378480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.378573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.378660 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.481730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.481800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.481811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.481840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.481852 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.584551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.584595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.584607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.584622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.584632 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.687085 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.687139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.687152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.687172 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.687186 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.789905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.789986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.790001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.790027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.790047 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.893115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.893202 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.893219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.893239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.893252 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.996255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.996313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.996329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.996351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:55 crc kubenswrapper[4756]: I1203 10:53:55.996365 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:55Z","lastTransitionTime":"2025-12-03T10:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.099096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.099141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.099152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.099167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.099178 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.202098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.202138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.202151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.202177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.202196 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.305285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.305311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.305320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.305332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.305341 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.409141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.409185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.409200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.409224 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.409239 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.511819 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.511879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.511895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.511921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.511937 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.615298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.615385 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.615401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.615451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.615464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.717989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.718033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.718043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.718057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.718067 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.820108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.820448 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.820561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.820654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.820739 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.923851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.923913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.923925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.923980 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:56 crc kubenswrapper[4756]: I1203 10:53:56.924001 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:56Z","lastTransitionTime":"2025-12-03T10:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.027347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.027419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.027430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.027446 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.027457 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.130222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.130274 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.130286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.130311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.130322 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.233274 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.233274 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.233275 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.233309 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:57 crc kubenswrapper[4756]: E1203 10:53:57.233499 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:57 crc kubenswrapper[4756]: E1203 10:53:57.233701 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:57 crc kubenswrapper[4756]: E1203 10:53:57.233832 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:57 crc kubenswrapper[4756]: E1203 10:53:57.233926 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.234635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.234705 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.234722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.234741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.234756 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.337912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.337984 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.337996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.338014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.338028 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.441477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.441526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.441538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.441555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.441566 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.544532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.544626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.544648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.544680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.544703 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.648198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.648265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.648290 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.648316 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.648335 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.751269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.751319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.751334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.751352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.751370 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.853797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.853865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.853885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.853910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.853930 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.956143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.956206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.956216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.956231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:57 crc kubenswrapper[4756]: I1203 10:53:57.956242 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:57Z","lastTransitionTime":"2025-12-03T10:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.058679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.058734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.058753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.058779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.058791 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.162182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.162245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.162254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.162269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.162278 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.234913 4756 scope.go:117] "RemoveContainer" containerID="1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.272592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.272648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.272666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.272689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.272707 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.376599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.376908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.377004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.377088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.377159 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.480398 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.480443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.480457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.480476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.480497 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.550479 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/1.log" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.556323 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.556912 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.579451 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.586303 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.586364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.586379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.586402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.586415 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.595574 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.613235 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.630137 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.649279 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.667082 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.689077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.689167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.689183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.689209 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.689227 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.693096 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:
53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.733408 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.746019 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc 
kubenswrapper[4756]: I1203 10:53:58.758835 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.773255 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.787871 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.792284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.792333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.792350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.792368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.792380 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.803519 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.814185 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.824106 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.837175 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:58Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.902148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.902199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.902211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.902230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:58 crc kubenswrapper[4756]: I1203 10:53:58.902246 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:58Z","lastTransitionTime":"2025-12-03T10:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.004550 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.004594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.004604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.004620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.004631 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.108338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.108381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.108389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.108403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.108413 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.211211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.211465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.211476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.211491 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.211500 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.233016 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.233186 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:53:59 crc kubenswrapper[4756]: E1203 10:53:59.233222 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.233240 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.233254 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:53:59 crc kubenswrapper[4756]: E1203 10:53:59.233431 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:53:59 crc kubenswrapper[4756]: E1203 10:53:59.233570 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:53:59 crc kubenswrapper[4756]: E1203 10:53:59.233665 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.252529 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.269650 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.285119 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.300818 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.314261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.314336 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.314355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.314386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.314406 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.318974 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z 
is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.348432 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.360897 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc 
kubenswrapper[4756]: I1203 10:53:59.379284 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50
a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.397625 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.415346 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.416371 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.416418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.416433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.416456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.416472 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.429863 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.443588 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.457745 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.472265 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.485741 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.497974 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.518607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.518651 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.518663 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.518678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.519012 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.573400 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/2.log" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.574236 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/1.log" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.577170 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec" exitCode=1 Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.577291 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.577360 4756 scope.go:117] "RemoveContainer" containerID="1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 
10:53:59.578216 4756 scope.go:117] "RemoveContainer" containerID="1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec" Dec 03 10:53:59 crc kubenswrapper[4756]: E1203 10:53:59.578439 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.593041 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 
10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.607716 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.621753 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.622093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.622164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.622183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.622212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.622232 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.633257 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.644894 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.658806 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.672830 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.687595 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.703180 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.710375 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:53:59 crc kubenswrapper[4756]: E1203 10:53:59.710530 
4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:59 crc kubenswrapper[4756]: E1203 10:53:59.710588 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:15.71057394 +0000 UTC m=+66.740575184 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.719614 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.725314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.725374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.725387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 
10:53:59.725406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.725418 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.732800 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.743733 4756 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.757143 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.770647 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.815149 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a522bf8e7a4f4eef992022cfd76d7a457cda14a89727ec01f456e3739953396\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:41Z\\\",\\\"message\\\":\\\"_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1203 10:53:41.387138 6166 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] 
Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1203 10:53:41.387134 6166 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-marketplace]} name:Service_openshift-marketplace/redhat-marketplace_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.140:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97b6e7b0-06ca-455e-8259-06895040cb0c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1203 10:53:41.386925 6166 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b
509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.828052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.828090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.828099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.828114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.828127 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.834314 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:53:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:53:59 crc 
kubenswrapper[4756]: I1203 10:53:59.931532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.931570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.931579 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.931595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:53:59 crc kubenswrapper[4756]: I1203 10:53:59.931607 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:53:59Z","lastTransitionTime":"2025-12-03T10:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.035258 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.035311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.035324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.035381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.035398 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.138498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.138533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.138544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.138559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.138569 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.242218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.242289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.242308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.242337 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.242358 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.349064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.349116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.349143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.349162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.349173 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.452092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.452138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.452150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.452165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.452175 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.555318 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.555364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.555379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.555396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.555408 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.584331 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/2.log" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.588830 4756 scope.go:117] "RemoveContainer" containerID="1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec" Dec 03 10:54:00 crc kubenswrapper[4756]: E1203 10:54:00.589207 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.604240 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.618777 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.640343 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.653102 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.658318 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.658389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.658405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.658431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.658446 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.674302 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.698543 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.716966 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.733017 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.748968 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.762526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.762635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.762661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.762687 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.762705 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.764453 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.780362 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 
10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.793295 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.807537 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.821229 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.832391 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.845834 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:00Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.865971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.866048 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.866063 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.866091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.866105 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.969640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.969720 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.969744 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.969773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:00 crc kubenswrapper[4756]: I1203 10:54:00.969795 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:00Z","lastTransitionTime":"2025-12-03T10:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.027288 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.027400 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.027427 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.027512 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.027582 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.027559 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:54:33.027506332 +0000 UTC m=+84.057507626 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.027750 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:33.027721089 +0000 UTC m=+84.057722473 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.027791 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:33.02776872 +0000 UTC m=+84.057770184 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.073207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.073271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.073285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.073302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.074851 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.128825 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.128931 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129189 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129239 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129253 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129253 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 
10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129298 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129327 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129337 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:33.129314794 +0000 UTC m=+84.159316038 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.129437 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:33.129400907 +0000 UTC m=+84.159402351 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.177836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.177887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.177899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.177918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.177933 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.233220 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.233262 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.233366 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.233357 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.233453 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.233220 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.233528 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:01 crc kubenswrapper[4756]: E1203 10:54:01.233594 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.281138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.281187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.281197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.281216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.281228 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.383750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.384018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.384079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.384171 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.384233 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.486798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.486849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.486861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.486879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.486893 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.590386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.590657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.590749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.590842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.590926 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.693482 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.693589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.693609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.693641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.693662 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.797028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.797074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.797086 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.797109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.797123 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.900021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.900137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.900165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.900195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:01 crc kubenswrapper[4756]: I1203 10:54:01.900213 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:01Z","lastTransitionTime":"2025-12-03T10:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.003785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.003829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.003839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.003858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.003869 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.106477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.106781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.106845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.106930 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.107019 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.209392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.210041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.210078 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.210099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.210118 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.313107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.313190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.313213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.313244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.313269 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.416328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.416377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.416386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.416405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.416418 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.519290 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.519735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.519874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.520047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.520186 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.623152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.623206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.623219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.623238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.623251 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.726346 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.726774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.726884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.727034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.727122 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.830385 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.830859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.831187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.831402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.831574 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.935170 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.935250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.935273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.935301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.935322 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.958008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.958079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.958096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.958123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.958144 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: E1203 10:54:02.972380 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:02Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.976629 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.976686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.976703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.976764 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.976782 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:02 crc kubenswrapper[4756]: E1203 10:54:02.993524 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:02Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.999047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.999103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.999115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.999136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:02 crc kubenswrapper[4756]: I1203 10:54:02.999151 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:02Z","lastTransitionTime":"2025-12-03T10:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.012591 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.017129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.017194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.017208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.017232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.017250 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.039129 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.045649 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.045706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.045719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.045746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.045759 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.064515 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.064777 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.067424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.067467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.067481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.067504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.067518 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.170614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.170702 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.170735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.170769 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.170791 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.233473 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.233557 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.233474 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.233474 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.233684 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.233818 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.233974 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:03 crc kubenswrapper[4756]: E1203 10:54:03.234109 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.273983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.274049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.274065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.274084 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.274096 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.378173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.378581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.378665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.378768 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.378849 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.412149 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.429617 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.440859 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.456039 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.472006 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc 
kubenswrapper[4756]: I1203 10:54:03.481718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.481820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.481852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.481889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.481915 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.495662 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.518227 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.534553 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.552702 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.565123 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.579683 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.585008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.585058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.585069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.585087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.585099 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.596749 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.616572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.630144 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.642551 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.656150 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.669577 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.683197 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:03Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.688718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.688769 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.688780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.688802 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.688815 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.792173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.792259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.792282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.792314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.792338 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.896007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.896078 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.896104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.896133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.896153 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.999820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.999870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.999883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.999899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:03 crc kubenswrapper[4756]: I1203 10:54:03.999911 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:03Z","lastTransitionTime":"2025-12-03T10:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.104622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.104693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.104712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.104738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.104760 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.208071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.208145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.208157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.208177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.208193 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.311534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.311616 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.311635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.311669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.311690 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.415015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.415081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.415096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.415117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.415132 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.518244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.518308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.518324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.518350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.518369 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.622771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.622842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.622864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.622897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.622919 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.727017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.727094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.727117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.727150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.727181 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.831581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.831662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.831683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.831716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.831743 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.935353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.935422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.935439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.935464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:04 crc kubenswrapper[4756]: I1203 10:54:04.935481 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:04Z","lastTransitionTime":"2025-12-03T10:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.039098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.039156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.039168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.039188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.039198 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.143126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.143469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.143631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.143746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.143852 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.233818 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.233979 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.233981 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:05 crc kubenswrapper[4756]: E1203 10:54:05.234564 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:05 crc kubenswrapper[4756]: E1203 10:54:05.234346 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.234161 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:05 crc kubenswrapper[4756]: E1203 10:54:05.234730 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:05 crc kubenswrapper[4756]: E1203 10:54:05.234864 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.246919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.246992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.247006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.247023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.247038 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.350321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.350398 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.350417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.350444 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.350462 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.454981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.455399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.455491 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.455667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.455753 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.558892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.558933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.558945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.558979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.558991 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.662026 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.662108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.662134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.662158 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.662174 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.765555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.765600 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.765610 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.765627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.765640 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.869312 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.869400 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.869415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.869435 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.869452 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.971988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.972038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.972056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.972076 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:05 crc kubenswrapper[4756]: I1203 10:54:05.972092 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:05Z","lastTransitionTime":"2025-12-03T10:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.074981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.075038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.075049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.075068 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.075085 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.178867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.178932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.178990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.179019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.179039 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.282179 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.282233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.282250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.282273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.282290 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.386081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.386132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.386147 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.386168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.386181 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.490239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.490311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.490324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.490352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.490367 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.593464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.593509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.593521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.593538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.593550 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.696479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.696539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.696556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.696583 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.696602 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.800232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.800305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.800320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.800343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.800358 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.903067 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.903308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.903323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.903344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:06 crc kubenswrapper[4756]: I1203 10:54:06.903358 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:06Z","lastTransitionTime":"2025-12-03T10:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.006295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.006343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.006354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.006370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.006382 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.109010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.109043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.109053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.109072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.109089 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.211994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.212107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.212131 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.212160 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.212183 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.233007 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.233107 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.233012 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:07 crc kubenswrapper[4756]: E1203 10:54:07.233160 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.233108 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:07 crc kubenswrapper[4756]: E1203 10:54:07.233240 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:07 crc kubenswrapper[4756]: E1203 10:54:07.233322 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:07 crc kubenswrapper[4756]: E1203 10:54:07.233429 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.315303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.315386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.315408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.315436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.315457 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.418545 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.418585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.418595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.418610 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.418621 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.522428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.522493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.522510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.522534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.522553 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.626134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.626286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.626320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.626364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.626390 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.729903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.729990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.730003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.730024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.730040 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.832524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.832568 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.832576 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.832591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.832603 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.936795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.936847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.936861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.936884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:07 crc kubenswrapper[4756]: I1203 10:54:07.936903 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:07Z","lastTransitionTime":"2025-12-03T10:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.039807 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.039891 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.040014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.040059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.040086 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.143077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.144142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.144186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.144215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.144239 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.248066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.248235 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.248269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.248296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.248314 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.351626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.351717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.351742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.351771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.351790 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.455715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.455791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.455810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.455837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.455889 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.559055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.559655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.559805 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.560042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.560217 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.663876 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.663944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.663997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.664025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.664048 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.773992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.774047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.774058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.774077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.774087 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.877586 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.877649 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.877668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.877698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.877718 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.980485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.980539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.980554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.980578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:08 crc kubenswrapper[4756]: I1203 10:54:08.980593 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:08Z","lastTransitionTime":"2025-12-03T10:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.083697 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.083741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.083782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.083803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.083815 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.185449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.185937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.185969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.185992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.186007 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.233999 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.234064 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.234069 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:09 crc kubenswrapper[4756]: E1203 10:54:09.234443 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.234653 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:09 crc kubenswrapper[4756]: E1203 10:54:09.235146 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:09 crc kubenswrapper[4756]: E1203 10:54:09.235993 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:09 crc kubenswrapper[4756]: E1203 10:54:09.236194 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.258688 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.275346 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.290263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.290333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.290349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.290376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.290394 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.294542 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.327861 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.346035 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.387620 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.392758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.392867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.392887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.392915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.392935 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.404468 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc 
kubenswrapper[4756]: I1203 10:54:09.420742 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50
a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 
10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.437947 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.452369 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.469310 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.487487 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.495491 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.495535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.495546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.495569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.495586 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.500647 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.515520 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.532274 4756 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.549697 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.570014 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:09Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.599156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.599241 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.599267 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.599299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.599323 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.702685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.702741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.702759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.702783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.702803 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.806821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.806903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.806925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.806997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.807020 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.910099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.910165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.910186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.910213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:09 crc kubenswrapper[4756]: I1203 10:54:09.910225 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:09Z","lastTransitionTime":"2025-12-03T10:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.013660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.013724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.013748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.013773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.013791 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.117864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.117912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.117925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.117945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.117975 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.220891 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.220925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.220934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.220969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.220983 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.338375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.338405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.338415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.338430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.338440 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.441994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.442041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.442076 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.442096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.442107 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.545593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.545639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.545648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.545668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.545678 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.649196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.649249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.649259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.649277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.649289 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.751347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.752004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.752219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.752342 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.752418 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.855631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.855662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.855672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.855689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.855700 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.958599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.959004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.959092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.959169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:10 crc kubenswrapper[4756]: I1203 10:54:10.959242 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:10Z","lastTransitionTime":"2025-12-03T10:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.062623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.062678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.062690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.062709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.062722 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.166246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.166293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.166305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.166323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.166336 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.233483 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.233577 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.233523 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:11 crc kubenswrapper[4756]: E1203 10:54:11.233716 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:11 crc kubenswrapper[4756]: E1203 10:54:11.233885 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:11 crc kubenswrapper[4756]: E1203 10:54:11.233965 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.234106 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:11 crc kubenswrapper[4756]: E1203 10:54:11.234312 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.269587 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.269642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.269658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.269675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.269686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.372068 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.372095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.372104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.372121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.372134 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.474844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.474887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.474897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.474914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.474926 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.576880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.577525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.577601 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.577686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.577751 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.680412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.680447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.680455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.680468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.680478 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.784040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.784095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.784108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.784128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.784141 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.886769 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.886817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.886829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.886844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.886860 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.989995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.990029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.990038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.990052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:11 crc kubenswrapper[4756]: I1203 10:54:11.990062 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:11Z","lastTransitionTime":"2025-12-03T10:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.093791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.094192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.094283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.094371 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.094484 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.198189 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.198524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.198714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.198856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.199015 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.301555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.301891 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.302007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.302119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.302215 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.406987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.407052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.407069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.407094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.407108 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.509690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.510079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.510186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.510268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.510338 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.613043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.613100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.613114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.613135 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.613150 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.716532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.716590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.716603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.716623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.716637 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.819758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.819821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.819848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.819880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.819904 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.923207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.923556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.923726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.923884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:12 crc kubenswrapper[4756]: I1203 10:54:12.923981 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:12Z","lastTransitionTime":"2025-12-03T10:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.027069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.027209 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.027235 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.027270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.027295 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.130538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.130604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.130619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.130636 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.130647 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.232854 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.233153 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.233338 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.233507 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.233609 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.233698 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.233797 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.233918 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.235492 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.235534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.235543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.235559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.235570 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.285167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.285229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.285240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.285257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.285271 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.301669 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:13Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.305885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.306212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.306288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.306376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.306462 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.354476 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:13Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.359410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.359443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.359452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.359470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.359483 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.373530 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:13Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:13 crc kubenswrapper[4756]: E1203 10:54:13.373675 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.376353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.376396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.376405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.376423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.376433 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.479885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.479944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.479987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.480013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.480031 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.582979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.583018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.583029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.583071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.583082 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.686627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.686831 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.686857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.686881 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.686898 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.789008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.789069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.789081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.789099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.789109 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.891728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.891794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.891818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.891850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.891875 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.995230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.995289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.995300 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.995321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:13 crc kubenswrapper[4756]: I1203 10:54:13.995341 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:13Z","lastTransitionTime":"2025-12-03T10:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.097858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.097899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.097908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.097921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.097929 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.200982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.201037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.201047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.201068 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.201084 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.234686 4756 scope.go:117] "RemoveContainer" containerID="1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec" Dec 03 10:54:14 crc kubenswrapper[4756]: E1203 10:54:14.234932 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.304347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.304396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.304408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.304429 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.304444 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.407560 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.407900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.408024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.408118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.408197 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.512237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.512292 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.512311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.512336 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.512353 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.615983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.616084 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.616098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.616123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.616138 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.719028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.719110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.719125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.719143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.719178 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.822045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.822081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.822089 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.822148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.822159 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.924896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.924966 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.924979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.925000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:14 crc kubenswrapper[4756]: I1203 10:54:14.925015 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:14Z","lastTransitionTime":"2025-12-03T10:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.028176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.028245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.028264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.028293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.028309 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.131874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.131921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.131931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.131946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.131985 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.233240 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.233293 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.233314 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:15 crc kubenswrapper[4756]: E1203 10:54:15.233410 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.233430 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:15 crc kubenswrapper[4756]: E1203 10:54:15.233602 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:15 crc kubenswrapper[4756]: E1203 10:54:15.233675 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:15 crc kubenswrapper[4756]: E1203 10:54:15.233763 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.235913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.236020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.236045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.236072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.236086 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.395412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.395481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.395492 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.395511 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.395521 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.498370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.498428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.498439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.498459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.498470 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.602285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.602365 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.602380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.602402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.602417 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.705312 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.705371 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.705385 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.705402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.705417 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.800541 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:15 crc kubenswrapper[4756]: E1203 10:54:15.800769 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:54:15 crc kubenswrapper[4756]: E1203 10:54:15.800912 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:54:47.800875476 +0000 UTC m=+98.830876900 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.808350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.808390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.808403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.808423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.808435 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.911460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.911521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.911541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.911563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:15 crc kubenswrapper[4756]: I1203 10:54:15.911583 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:15Z","lastTransitionTime":"2025-12-03T10:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.014753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.014808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.014821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.014844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.014856 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.117634 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.117703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.117731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.117765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.117855 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.220659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.220711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.220723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.220744 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.220760 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.324557 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.324603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.324615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.324633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.324646 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.427356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.427392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.427405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.427423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.427433 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.530751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.530826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.530848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.530882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.530906 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.634103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.634160 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.634174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.634192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.634202 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.645788 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/0.log" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.645862 4756 generic.go:334] "Generic (PLEG): container finished" podID="d0dad5dd-86f8-4a8a-aed6-dd07123c5058" containerID="c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c" exitCode=1 Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.645910 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerDied","Data":"c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.646782 4756 scope.go:117] "RemoveContainer" containerID="c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.666863 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.679004 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.692752 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc 
kubenswrapper[4756]: I1203 10:54:16.715600 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39
b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.732317 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.738251 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.738594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.738698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.738803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.738893 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.747032 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.760482 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.775924 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.790391 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.804834 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.823066 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.838643 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.842652 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.842700 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.842712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.842729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.842741 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.854809 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.867659 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.879572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.892736 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.904884 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:16Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.946403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.946450 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.946459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.946479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:16 crc kubenswrapper[4756]: I1203 10:54:16.946499 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:16Z","lastTransitionTime":"2025-12-03T10:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.049582 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.049618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.049626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.049639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.049651 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.152835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.152900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.152915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.152942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.152973 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.233853 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.233891 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.234020 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.234096 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:17 crc kubenswrapper[4756]: E1203 10:54:17.234059 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:17 crc kubenswrapper[4756]: E1203 10:54:17.234257 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:17 crc kubenswrapper[4756]: E1203 10:54:17.234424 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:17 crc kubenswrapper[4756]: E1203 10:54:17.234593 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.256205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.256257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.256268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.256285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.256297 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.358886 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.358931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.358943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.358978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.358988 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.462347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.462402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.462417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.462440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.462488 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.565412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.565458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.565471 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.565491 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.565507 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.653984 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/0.log" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.654061 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerStarted","Data":"315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.668907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.668968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.668982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.669001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.669014 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.673560 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.701175 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.721992 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.747163 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.761259 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.772218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.772271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.772286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.772307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.772326 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.776342 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.788784 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.801461 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.816266 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.831667 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.842790 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.853644 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.867499 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.874935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.875012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.875027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.875047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.875062 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.879655 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.894447 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.907320 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.919123 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:17Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.978758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.978795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.978808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.978830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:17 crc kubenswrapper[4756]: I1203 10:54:17.978843 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:17Z","lastTransitionTime":"2025-12-03T10:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.081448 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.081498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.081512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.081530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.081545 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.183818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.183851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.183862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.183880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.183890 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.286837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.286899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.286922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.286981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.287005 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.389856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.389939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.390007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.390043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.390069 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.492479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.492525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.492534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.492549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.492561 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.595537 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.595597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.595611 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.595632 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.595651 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.700548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.700666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.700693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.700727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.700752 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.805059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.805122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.805134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.805156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.805171 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.907695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.907749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.907762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.907784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:18 crc kubenswrapper[4756]: I1203 10:54:18.907798 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:18Z","lastTransitionTime":"2025-12-03T10:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.010754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.010803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.010812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.010836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.010847 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.114129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.114165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.114176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.114216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.114226 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.217471 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.217514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.217524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.217540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.217551 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.233342 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:19 crc kubenswrapper[4756]: E1203 10:54:19.233628 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.233998 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:19 crc kubenswrapper[4756]: E1203 10:54:19.234105 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.234255 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:19 crc kubenswrapper[4756]: E1203 10:54:19.234372 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.234373 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:19 crc kubenswrapper[4756]: E1203 10:54:19.234633 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.249285 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.265754 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.278583 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.293131 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.309661 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.320036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.320071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.320081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.320099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.320110 4756 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.329316 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.341877 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.359228 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e
6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.371558 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.384476 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.402502 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.414500 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.422193 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.422273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.422313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.422335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.422348 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.427997 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.442339 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.458280 4756 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.474508 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.487172 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:19Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.525277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.525344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.525360 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.525383 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.525402 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.628457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.628514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.628527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.628549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.628564 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.732403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.732440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.732451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.732469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.732478 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.835508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.835569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.835588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.835617 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.835635 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.938725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.938775 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.938794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.938822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:19 crc kubenswrapper[4756]: I1203 10:54:19.938839 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:19Z","lastTransitionTime":"2025-12-03T10:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.042121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.042184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.042196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.042220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.042233 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.145246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.145323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.145349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.145384 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.145406 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.248186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.248236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.248249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.248269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.248283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.351048 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.351106 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.351125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.351154 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.351175 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.453659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.453703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.453716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.453738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.453752 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.557363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.557431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.557442 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.557462 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.557472 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.660780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.660820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.660831 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.660849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.660898 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.763474 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.763524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.763534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.763552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.763566 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.867225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.867304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.867322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.867350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.867368 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.969753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.969825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.969843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.969872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:20 crc kubenswrapper[4756]: I1203 10:54:20.969890 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:20Z","lastTransitionTime":"2025-12-03T10:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.072924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.072993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.073008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.073031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.073048 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.176162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.176227 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.176239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.176257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.176270 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.234341 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.234453 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.234527 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:21 crc kubenswrapper[4756]: E1203 10:54:21.234531 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.234658 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:21 crc kubenswrapper[4756]: E1203 10:54:21.234787 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:21 crc kubenswrapper[4756]: E1203 10:54:21.234985 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:21 crc kubenswrapper[4756]: E1203 10:54:21.235110 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.279079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.279135 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.279145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.279170 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.279182 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.382553 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.382608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.382622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.382641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.382655 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.485475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.485536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.485548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.485571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.485585 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.588985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.589024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.589034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.589071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.589084 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.691762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.691838 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.691855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.691879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.691897 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.794808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.794839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.794846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.794860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.794871 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.907409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.907494 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.907510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.907533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:21 crc kubenswrapper[4756]: I1203 10:54:21.907547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:21Z","lastTransitionTime":"2025-12-03T10:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.011256 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.011319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.011332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.011353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.011369 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.114926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.115038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.115064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.115092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.115111 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.217912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.218005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.218022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.218047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.218066 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.250708 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.321924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.322017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.322036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.322065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.322088 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.425745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.425860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.426115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.426144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.426373 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.529516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.529592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.529629 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.529679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.529692 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.633388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.633443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.633462 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.633488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.633512 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.736608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.736669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.736686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.736712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.736732 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.840489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.840554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.840572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.840604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.840623 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.944361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.944433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.944458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.944496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:22 crc kubenswrapper[4756]: I1203 10:54:22.944523 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:22Z","lastTransitionTime":"2025-12-03T10:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.048262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.048316 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.048333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.048362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.048380 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.151436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.151508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.151520 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.151544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.151559 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.233680 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.233718 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.233803 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.233850 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.234006 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.234159 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.234278 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.234381 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.254190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.254230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.254242 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.254261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.254274 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.357908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.357979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.357992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.358013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.358025 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.460777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.460831 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.460848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.460871 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.460895 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.563730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.563799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.563820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.563847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.563866 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.579934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.580035 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.580054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.580081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.580104 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.603333 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:23Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.610181 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.610263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.610283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.610319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.610341 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.639292 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:23Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.650361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.650437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.650452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.650473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.650487 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.673445 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:23Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.682100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.682159 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.682174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.682214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.682228 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.700836 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:23Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.708226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.708392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.708413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.708440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.708464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.726701 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:23Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:23 crc kubenswrapper[4756]: E1203 10:54:23.727077 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.729115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.729178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.729195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.729216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.729228 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.832856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.832918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.832937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.833009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.833030 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.935795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.936160 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.936238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.936311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:23 crc kubenswrapper[4756]: I1203 10:54:23.936373 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:23Z","lastTransitionTime":"2025-12-03T10:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.039565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.039641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.039655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.039679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.039690 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.142660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.142724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.142734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.142749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.142760 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.245035 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.245072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.245080 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.245091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.245100 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.349508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.349561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.349570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.349591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.349605 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.454199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.454269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.454288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.454310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.454324 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.556709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.556767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.556778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.556795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.556807 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.659800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.659845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.659855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.659874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.659893 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.761802 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.761851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.761862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.761878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.761889 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.864195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.864244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.864253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.864269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.864280 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.966995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.967055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.967073 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.967095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:24 crc kubenswrapper[4756]: I1203 10:54:24.967113 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:24Z","lastTransitionTime":"2025-12-03T10:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.070513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.070577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.070595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.070618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.070637 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.174144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.174366 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.174383 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.174407 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.174420 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.233680 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.233780 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.233810 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:25 crc kubenswrapper[4756]: E1203 10:54:25.233909 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.233936 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:25 crc kubenswrapper[4756]: E1203 10:54:25.234057 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:25 crc kubenswrapper[4756]: E1203 10:54:25.234131 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:25 crc kubenswrapper[4756]: E1203 10:54:25.234241 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.278112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.278157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.278166 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.278181 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.278191 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.381410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.381455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.381465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.381481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.381494 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.484910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.484986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.485001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.485024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.485037 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.587489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.587549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.587565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.587584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.587596 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.689722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.689785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.689797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.689816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.689831 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.792571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.792622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.792631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.792647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.792659 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.895666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.895711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.895721 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.895742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.895754 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.998161 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.998201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.998212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.998226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:25 crc kubenswrapper[4756]: I1203 10:54:25.998236 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:25Z","lastTransitionTime":"2025-12-03T10:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.101582 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.101630 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.101640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.101663 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.101677 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.205719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.205785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.205809 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.205838 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.205859 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.309884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.309948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.309978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.309995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.310007 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.413437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.413492 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.413507 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.413530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.413547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.516556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.516605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.516615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.516628 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.516640 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.619061 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.619126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.619137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.619156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.619171 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.725413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.725469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.725483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.725503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.725517 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.828928 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.829024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.829033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.829065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.829077 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.931683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.931737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.931749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.931766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:26 crc kubenswrapper[4756]: I1203 10:54:26.931778 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:26Z","lastTransitionTime":"2025-12-03T10:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.034833 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.034890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.034907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.034944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.034979 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.138209 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.138261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.138277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.138301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.138320 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.233137 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.233232 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.233378 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:27 crc kubenswrapper[4756]: E1203 10:54:27.233367 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:27 crc kubenswrapper[4756]: E1203 10:54:27.233585 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:27 crc kubenswrapper[4756]: E1203 10:54:27.233684 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.234057 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:27 crc kubenswrapper[4756]: E1203 10:54:27.234185 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.241874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.241995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.242120 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.242145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.242164 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.345123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.345209 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.345231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.345262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.345283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.447534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.447622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.447636 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.447654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.447668 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.551106 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.551160 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.551175 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.551196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.551214 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.654390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.654437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.654446 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.654461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.654471 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.757087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.757117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.757126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.757140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.757150 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.860064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.860143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.860161 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.860185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.860203 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.962324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.962381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.962392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.962414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:27 crc kubenswrapper[4756]: I1203 10:54:27.962430 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:27Z","lastTransitionTime":"2025-12-03T10:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.065006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.065052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.065061 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.065077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.065087 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.168591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.168632 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.168643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.168662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.168677 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.271009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.271043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.271051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.271064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.271074 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.374485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.374525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.374534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.374548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.374563 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.477449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.477503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.477514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.477530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.477541 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.579801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.579842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.579853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.579869 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.579878 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.682812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.682907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.682934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.683020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.683040 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.786945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.787048 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.787070 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.787097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.787120 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.891055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.891139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.891162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.891192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.891214 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.995107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.995154 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.995166 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.995186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:28 crc kubenswrapper[4756]: I1203 10:54:28.995198 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:28Z","lastTransitionTime":"2025-12-03T10:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.098240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.098315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.098334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.098363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.098383 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.202291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.202337 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.202350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.202370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.202387 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.233783 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.233798 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:29 crc kubenswrapper[4756]: E1203 10:54:29.233941 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.234033 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.233949 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:29 crc kubenswrapper[4756]: E1203 10:54:29.234158 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:29 crc kubenswrapper[4756]: E1203 10:54:29.234627 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:29 crc kubenswrapper[4756]: E1203 10:54:29.234878 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.235104 4756 scope.go:117] "RemoveContainer" containerID="1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.251722 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.271519 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.282356 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"690986b4-18a2-46a1-9630-146ab8b3b313\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ccc45a014b5221a5e22a35b29df5446825770844e7e9b3e8cc01228954ca18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.294425 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.304662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.304692 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.304701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.304715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.304726 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.307514 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.321109 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.334582 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.346005 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc 
kubenswrapper[4756]: I1203 10:54:29.357771 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.376288 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.388134 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.406679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.406724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.406736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.406752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.406767 4756 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.412338 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.421858 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.431009 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.444661 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.455340 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.469653 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.482719 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:29Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.509465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.509513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.509524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.509540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.509551 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.612321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.612403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.612416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.612430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.612441 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.715370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.715420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.715430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.715443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.715453 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.818296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.818338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.818348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.818362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.818372 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.920728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.920914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.921007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.921252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:29 crc kubenswrapper[4756]: I1203 10:54:29.921271 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:29Z","lastTransitionTime":"2025-12-03T10:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.024623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.024667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.024681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.024699 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.024712 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.128052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.128093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.128105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.128125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.128137 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.230788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.230840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.230854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.230872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.230886 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.333800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.333839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.333850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.333870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.333884 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.437777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.437877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.437899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.437934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.438009 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.542536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.542617 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.542637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.542666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.542686 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.647723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.647900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.647914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.648479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.648528 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.698706 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/2.log" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.703761 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.707381 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.722336 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.734922 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.746019 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"690986b4-18a2-46a1-9630-146ab8b3b313\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ccc45a014b5221a5e22a35b29df5446825770844e7e9b3e8cc01228954ca18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.752490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.752530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.752540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.752562 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.752582 4756 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.759480 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.773384 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.790865 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.807172 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.818157 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc 
kubenswrapper[4756]: I1203 10:54:30.829815 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.848585 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.855112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.855178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.855198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.855229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.855249 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.872494 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.896715 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.908246 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.917724 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.932340 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.944704 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.957865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.957923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.957938 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.957973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.957989 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:30Z","lastTransitionTime":"2025-12-03T10:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.960027 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:30 crc kubenswrapper[4756]: I1203 10:54:30.973873 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:30Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.060877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.060928 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.060938 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.060967 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.060980 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.162815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.162862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.162874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.162892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.162904 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.233017 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.233067 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.233145 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.233219 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:31 crc kubenswrapper[4756]: E1203 10:54:31.233315 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:31 crc kubenswrapper[4756]: E1203 10:54:31.233204 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:31 crc kubenswrapper[4756]: E1203 10:54:31.233399 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:31 crc kubenswrapper[4756]: E1203 10:54:31.233454 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.265882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.265927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.265935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.265966 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.265984 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.402614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.402653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.402662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.402676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.402688 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.504877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.504922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.504934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.504971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.504987 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.607481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.607519 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.607528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.607543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.607555 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.714106 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.714150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.714162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.714176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.714187 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.816862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.816929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.816947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.817003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.817035 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.920514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.920562 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.920571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.920587 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:31 crc kubenswrapper[4756]: I1203 10:54:31.920600 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:31Z","lastTransitionTime":"2025-12-03T10:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.023307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.023358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.023373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.023395 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.023410 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.126149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.126201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.126213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.126232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.126246 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.229382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.229439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.229452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.229469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.229481 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.333160 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.333705 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.333986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.334164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.334317 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.438889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.439535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.439645 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.439717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.439784 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.542805 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.543141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.543213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.543275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.543332 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.650920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.650990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.651003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.651020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.651035 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.719507 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/3.log" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.720381 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/2.log" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.724288 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" exitCode=1 Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.724340 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.724393 4756 scope.go:117] "RemoveContainer" containerID="1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.725350 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 10:54:32 crc kubenswrapper[4756]: E1203 10:54:32.725563 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.753470 4756 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.753914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.754096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.754269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.754454 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.761599 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.783778 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.801983 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.820664 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.837736 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.854868 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.858152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.858205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.858224 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.858247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.858262 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.869355 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"690986b4-18a2-46a1-9630-146ab8b3b313\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ccc45a014b5221a5e22a35b29df5446825770844e7e9b3e8cc01228954ca18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.886219 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.908894 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.929396 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.945343 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.960730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.960912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.961029 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.961132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.961213 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:32Z","lastTransitionTime":"2025-12-03T10:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.965642 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:32 crc kubenswrapper[4756]: I1203 10:54:32.988448 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:32Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.007256 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.031381 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.047467 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.053376 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.053518 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:37.053495232 +0000 UTC m=+148.083496486 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.053646 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.053675 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.053791 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.053844 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:37.053834573 +0000 UTC m=+148.083835837 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.053885 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.053993 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.053931586 +0000 UTC m=+148.083932870 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.065751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.065811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.065826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.065851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.065867 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.069447 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:31Z\\\",\\\"message\\\":\\\"n-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474687 6782 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:54:31.474709 6782 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:54:31.474737 6782 factory.go:656] Stopping watch factory\\\\nI1203 10:54:31.474762 6782 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 10:54:31.474772 6782 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI1203 10:54:31.474848 6782 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474922 6782 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475020 6782 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475089 6782 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475302 6782 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475415 6782 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\
\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.084717 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc 
kubenswrapper[4756]: I1203 10:54:33.155002 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.155049 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155187 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155212 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155225 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155270 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:37.155255345 +0000 UTC m=+148.185256589 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155187 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155311 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155323 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.155354 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.155344658 +0000 UTC m=+148.185345902 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.168642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.168739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.168761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.168791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.168807 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.233836 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.233999 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.234212 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.234304 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.234424 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.234505 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.235043 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.235338 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.271452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.271487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.271498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.271515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.271524 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.375110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.375668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.375681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.375704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.375718 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.478263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.478348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.478370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.478401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.478424 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.582609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.583166 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.583318 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.583508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.583649 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.694426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.694919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.695308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.695456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.695552 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.729509 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/3.log" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.798243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.798306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.798319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.798336 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.798348 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.828018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.828062 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.828074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.828091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.828106 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.847578 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.852382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.852453 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.852472 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.852500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.852520 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.869708 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.876023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.876204 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.876268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.876335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.876402 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.892326 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.896567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.896837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.896901 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.897006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.897068 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.911846 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt at 10:54:33.892326, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.916795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.916825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.916834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.916848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.916858 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.933025 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:33Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:33 crc kubenswrapper[4756]: E1203 10:54:33.933363 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.935328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.935439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.935519 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.935599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:33 crc kubenswrapper[4756]: I1203 10:54:33.935664 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:33Z","lastTransitionTime":"2025-12-03T10:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.037872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.038215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.038299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.038439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.038535 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.142033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.142369 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.142455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.142575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.142688 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.248595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.248934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.249074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.249149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.249215 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.352004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.352046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.352055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.352071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.352081 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.454656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.455199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.455385 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.455530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.455661 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.558028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.559016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.559143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.559410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.559486 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.663301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.663350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.663359 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.663374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.663384 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.765685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.765735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.765750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.765771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.765786 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.868691 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.868728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.868737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.868752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.868763 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.972009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.972075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.972091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.972109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:34 crc kubenswrapper[4756]: I1203 10:54:34.972120 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:34Z","lastTransitionTime":"2025-12-03T10:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.074327 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.074393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.074406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.074422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.074431 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.190101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.190152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.190163 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.190178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.190187 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.233696 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.233725 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.233812 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.233858 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:35 crc kubenswrapper[4756]: E1203 10:54:35.233846 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:35 crc kubenswrapper[4756]: E1203 10:54:35.234131 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:35 crc kubenswrapper[4756]: E1203 10:54:35.234122 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:35 crc kubenswrapper[4756]: E1203 10:54:35.234354 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.293660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.293723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.293735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.293762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.293778 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.396841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.396900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.396916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.396938 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.396975 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.500977 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.501036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.501048 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.501071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.501084 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.605180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.605274 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.605308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.605348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.605381 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.708436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.708500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.708517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.708546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.708561 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.811909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.811988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.812004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.812035 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.812055 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.915567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.915670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.915688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.915717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:35 crc kubenswrapper[4756]: I1203 10:54:35.915738 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:35Z","lastTransitionTime":"2025-12-03T10:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.019717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.019779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.019792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.019827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.019842 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.122778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.122860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.122883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.122910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.122928 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.226237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.226308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.226331 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.226365 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.226385 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.329296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.329348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.329367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.329388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.329401 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.432605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.432680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.432701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.432732 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.432752 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.536486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.536535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.536548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.536567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.536600 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.640516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.640571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.640579 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.640595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.640607 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.744032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.744097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.744111 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.744134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.744153 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.848525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.848613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.848638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.848670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.848701 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.962122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.962246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.962278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.962307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:36 crc kubenswrapper[4756]: I1203 10:54:36.962329 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:36Z","lastTransitionTime":"2025-12-03T10:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.066760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.066859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.066885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.066919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.066941 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.170856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.171008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.171045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.171076 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.171097 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.233193 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.233203 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.233236 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.233243 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:37 crc kubenswrapper[4756]: E1203 10:54:37.233775 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:37 crc kubenswrapper[4756]: E1203 10:54:37.233879 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:37 crc kubenswrapper[4756]: E1203 10:54:37.234054 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:37 crc kubenswrapper[4756]: E1203 10:54:37.234260 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.257454 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.274143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.274217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.274238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.274265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.274286 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.378473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.378788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.378940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.379090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.379194 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.482924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.482999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.483014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.483034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.483050 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.585793 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.585829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.585839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.585856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.585867 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.688780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.688835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.688849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.688868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.688884 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.791614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.791681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.791695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.791722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.791740 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.894892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.894927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.894938 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.894978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.894989 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.999344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.999420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.999444 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.999473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:37 crc kubenswrapper[4756]: I1203 10:54:37.999492 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:37Z","lastTransitionTime":"2025-12-03T10:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.103093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.103169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.103188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.103215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.103234 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.207714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.207766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.207777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.207794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.207804 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.310525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.310580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.310593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.310619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.310632 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.413454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.413504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.413515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.413534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.413547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.515480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.515521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.515532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.515556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.515570 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.618354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.618400 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.618413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.618430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.618440 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.721117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.721185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.721207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.721233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.721251 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.824941 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.825042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.825062 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.825094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.825114 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.928593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.928655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.928692 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.928724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:38 crc kubenswrapper[4756]: I1203 10:54:38.928749 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:38Z","lastTransitionTime":"2025-12-03T10:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.031587 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.031635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.031647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.031666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.031679 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.135449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.135495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.135513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.135536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.135550 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.232881 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.232946 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.233129 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:39 crc kubenswrapper[4756]: E1203 10:54:39.233123 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.233196 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:39 crc kubenswrapper[4756]: E1203 10:54:39.233361 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:39 crc kubenswrapper[4756]: E1203 10:54:39.233513 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:39 crc kubenswrapper[4756]: E1203 10:54:39.233673 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.238983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.239033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.239054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.239118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.239138 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.251425 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.277623 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 
10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.296091 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
3T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.313071 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.330593 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.343947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.344060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.344071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.344091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.344103 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.351105 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.365661 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7
fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.381494 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"690986b4-18a2-46a1-9630-146ab8b3b313\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ccc45a014b5221a5e22a35b29df5446825770844e7e9b3e8cc01228954ca18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.418754 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65bd3423-2e63-4afa-90a9-4db4c36fa9ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00447ddce8e8cd04c24e8c4c48311968ce8c53aaed2913aa32c5b3673b9dc658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca64c7bcb4c04c7412d3d79d70ece2b27f16d0343fe0d0ec069771f734a67ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b934076ce1b273fa132a066e5641af99b6e0b62f002bc20f10f0bfa4f2c1f7e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011be6cdd5ebe67b7b26f6d81922abed2cfedd5c2256a7338662869c1128ca35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88faa812e4c215b5d08c639d100ec61f8c5867bf2e88a2954e4156f53b3257d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a1973df1495d137d0f43b666da34eead1b891a539390ffc04d031acee490a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3a1973df1495d137d0f43b666da34eead1b891a539390ffc04d031acee490a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.436912 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:
10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.446996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.447072 
4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.447086 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.447102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.447113 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.455589 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.469484 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.487742 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.503220 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.517086 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.533608 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.548938 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.550096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.550136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.550149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.550168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.550182 4756 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.570805 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f7b0e8b21853fa3760d279865513c5cd07fad8224abb833349c8b7390b83bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:53:59Z\\\",\\\"message\\\":\\\"-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1203 10:53:59.141875 6384 ovnkube.go:599] Stopped ovnkube\\\\nI1203 10:53:59.141885 6384 services_controller.go:452] Built service openshift-operator-lifecycle-manager/packageserver-service per-node LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141926 6384 services_controller.go:453] Built service openshift-operator-lifecycle-manager/packageserver-service template LB for network=default: []services.LB{}\\\\nI1203 10:53:59.141946 6384 services_controller.go:454] Service openshift-operator-lifecycle-manager/packageserver-service for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1203 10:53:59.142002 6384 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 10:53:59.142067 6384 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:31Z\\\",\\\"message\\\":\\\"n-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474687 6782 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:54:31.474709 6782 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:54:31.474737 6782 factory.go:656] Stopping watch factory\\\\nI1203 10:54:31.474762 6782 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 10:54:31.474772 6782 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI1203 10:54:31.474848 6782 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474922 6782 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475020 6782 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475089 6782 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475302 6782 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475415 6782 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\
\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.585088 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:39Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:39 crc 
kubenswrapper[4756]: I1203 10:54:39.652751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.652808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.652820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.652836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.652849 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.756233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.756303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.756319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.756627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.756664 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.860008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.860046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.860059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.860077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.860088 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.962709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.962781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.962799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.962828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:39 crc kubenswrapper[4756]: I1203 10:54:39.962847 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:39Z","lastTransitionTime":"2025-12-03T10:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.065627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.065668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.065676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.065694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.065705 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.168605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.168690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.168709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.168737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.168756 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.272052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.272137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.272160 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.272196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.272222 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.374837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.374935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.374971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.374994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.375008 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.478021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.478081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.478093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.478114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.478132 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.580660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.580719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.580735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.580754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.580769 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.683749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.683818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.683841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.683874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.683899 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.786826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.786897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.786911 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.786928 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.786942 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.889785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.889846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.889861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.889886 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.889905 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.993461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.993529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.993543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.993566 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:40 crc kubenswrapper[4756]: I1203 10:54:40.993579 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:40Z","lastTransitionTime":"2025-12-03T10:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.096117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.096592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.096755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.096877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.097015 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.199427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.199822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.199902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.200023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.200122 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.233902 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.234001 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.234065 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.234090 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:41 crc kubenswrapper[4756]: E1203 10:54:41.234353 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:41 crc kubenswrapper[4756]: E1203 10:54:41.234580 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:41 crc kubenswrapper[4756]: E1203 10:54:41.234734 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:41 crc kubenswrapper[4756]: E1203 10:54:41.234805 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.303431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.303488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.303500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.303519 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.303531 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.406492 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.406567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.406580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.406601 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.406615 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.510815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.510929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.510969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.511001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.511022 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.615205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.615283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.615303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.615333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.615355 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.718756 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.718798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.718811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.718907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.718935 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.822303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.822383 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.822401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.822423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.822442 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.925348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.925517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.925531 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.925550 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:41 crc kubenswrapper[4756]: I1203 10:54:41.925566 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:41Z","lastTransitionTime":"2025-12-03T10:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.028174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.028234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.028245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.028267 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.028280 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.132605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.132660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.132671 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.132693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.132705 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.236281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.236358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.236376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.236398 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.236416 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.339695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.339806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.339832 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.339868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.339891 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.443252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.443357 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.443380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.443423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.443445 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.546822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.546905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.546929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.546998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.547024 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.651157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.651234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.651253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.651280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.651301 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.755099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.755167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.755206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.755244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.755283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.858833 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.858896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.858911 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.858932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.858970 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.962008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.962109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.962133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.962169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:42 crc kubenswrapper[4756]: I1203 10:54:42.962190 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:42Z","lastTransitionTime":"2025-12-03T10:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.065884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.065988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.066018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.066060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.066079 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.170246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.170309 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.170328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.170354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.170372 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.233584 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.233658 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.233706 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:43 crc kubenswrapper[4756]: E1203 10:54:43.233766 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.233706 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:43 crc kubenswrapper[4756]: E1203 10:54:43.233875 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:43 crc kubenswrapper[4756]: E1203 10:54:43.234112 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:43 crc kubenswrapper[4756]: E1203 10:54:43.234207 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.274270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.274338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.274356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.274390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.274412 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.378155 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.378223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.378233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.378251 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.378263 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.482002 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.482065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.482083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.482105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.482122 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.594050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.594157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.594188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.594225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.594249 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.698217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.698294 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.698313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.698343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.698364 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.802195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.802270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.802285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.802306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.802321 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.905262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.905348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.905367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.905405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:43 crc kubenswrapper[4756]: I1203 10:54:43.905426 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:43Z","lastTransitionTime":"2025-12-03T10:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.001091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.001163 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.001182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.001212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.001232 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: E1203 10:54:44.019316 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.023683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.023765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.023790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.023821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.023844 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: E1203 10:54:44.038398 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.042738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.042814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.042833 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.042853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.042870 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: E1203 10:54:44.055726 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.060005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.060051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.060069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.060095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.060114 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: E1203 10:54:44.075340 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.080870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.080934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.080948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.080993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.081008 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: E1203 10:54:44.093771 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:44Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:44 crc kubenswrapper[4756]: E1203 10:54:44.094135 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.097706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.097761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.097779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.097804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.097823 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.201670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.201726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.201740 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.201761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.201776 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.305414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.305483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.305501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.305527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.305547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.408253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.408317 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.408333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.408358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.408374 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.511719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.511794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.511807 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.511828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.511841 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.614318 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.614406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.614431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.614466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.614492 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.717672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.717723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.717732 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.717750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.717762 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.820717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.820772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.820793 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.820818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.820832 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.924056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.924103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.924114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.924133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:44 crc kubenswrapper[4756]: I1203 10:54:44.924151 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:44Z","lastTransitionTime":"2025-12-03T10:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.027996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.028095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.028110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.028132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.028147 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.131226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.131286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.131303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.131331 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.131348 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.233061 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.233093 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:45 crc kubenswrapper[4756]: E1203 10:54:45.233555 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.233630 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.233661 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:45 crc kubenswrapper[4756]: E1203 10:54:45.233834 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.234074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.234101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.234120 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.234150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.234209 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: E1203 10:54:45.234326 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:45 crc kubenswrapper[4756]: E1203 10:54:45.234504 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.345868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.345936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.345946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.345992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.346005 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.449558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.449633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.449650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.449674 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.449692 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.552424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.552487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.552504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.552527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.552544 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.656599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.656851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.656866 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.656887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.656902 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.759757 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.759813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.759827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.759845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.759857 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.863198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.863259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.863270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.863293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.863305 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.966478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.966524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.966532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.966546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:45 crc kubenswrapper[4756]: I1203 10:54:45.966557 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:45Z","lastTransitionTime":"2025-12-03T10:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.070702 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.070760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.070778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.070805 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.070824 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.174423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.174468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.174478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.174497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.174509 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.234236 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 10:54:46 crc kubenswrapper[4756]: E1203 10:54:46.234452 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.248665 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.267236 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d25
22e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.278790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.278869 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.278912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.278949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.279010 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.286363 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9ada
ddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.304335 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.336162 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.355926 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.371694 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.382426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.382459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.382469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.382483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.382498 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.388183 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"690986b4-18a2-46a1-9630-146ab8b3b313\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ccc45a014b5221a5e22a35b29df5446825770844e7e9b3e8cc01228954ca18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.421179 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65bd3423-2e63-4afa-90a9-4db4c36fa9ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00447ddce8e8cd04c24e8c4c48311968ce8c53aaed2913aa32c5b3673b9dc658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca64c7bcb4c04c7412d3d79d70ece2b27f16d0343fe0d0ec069771f734a67ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b934076ce1b273fa132a066e5641af99b6e0b62f002bc20f10f0bfa4f2c1f7e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011be6cdd5ebe67b7b26f6d81922abed2cfedd5c2256a7338662869c1128ca35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88faa812e4c215b5d08c639d100ec61f8c5867bf2e88a2954e4156f53b3257d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a1973df1495d137d0f43b666da34eead1b891a539390ffc04d031acee490a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3a1973df1495d137d0f43b666da34eead1b891a539390ffc04d031acee490a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.438344 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:
10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.461481 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.485450 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.485498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.485510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.485530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.485542 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.489630 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.509537 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.529015 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.546716 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b
52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.566935 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9
d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.588822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.589136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.589219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.589288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.589355 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.592322 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.615884 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:31Z\\\",\\\"message\\\":\\\"n-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go
:140\\\\nI1203 10:54:31.474687 6782 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:54:31.474709 6782 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:54:31.474737 6782 factory.go:656] Stopping watch factory\\\\nI1203 10:54:31.474762 6782 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 10:54:31.474772 6782 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 10:54:31.474848 6782 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474922 6782 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475020 6782 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475089 6782 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475302 6782 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475415 6782 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.630660 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:46Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.692225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.692333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.692347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.692391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.692404 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.795509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.795592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.795615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.795641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.795659 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.898756 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.898826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.898840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.898864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:46 crc kubenswrapper[4756]: I1203 10:54:46.898882 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:46Z","lastTransitionTime":"2025-12-03T10:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.002089 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.002147 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.002163 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.002184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.002277 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.106994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.107062 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.107078 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.107101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.107122 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.210574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.210639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.210658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.210688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.210708 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.233228 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.233237 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.233296 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.233306 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:47 crc kubenswrapper[4756]: E1203 10:54:47.233446 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:47 crc kubenswrapper[4756]: E1203 10:54:47.233521 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:47 crc kubenswrapper[4756]: E1203 10:54:47.233618 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:47 crc kubenswrapper[4756]: E1203 10:54:47.233755 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.314230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.314615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.314769 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.314922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.315120 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.419257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.419913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.420218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.420427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.420631 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.525025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.525108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.525136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.525173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.525205 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.628633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.628699 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.628716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.628739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.628755 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.736027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.736064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.736075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.736092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.736103 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.839617 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.839675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.839686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.839712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.839725 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.849615 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:47 crc kubenswrapper[4756]: E1203 10:54:47.849814 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:54:47 crc kubenswrapper[4756]: E1203 10:54:47.849880 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs podName:cd88c3db-a819-4fb9-a952-30dc1b67c375 nodeName:}" failed. No retries permitted until 2025-12-03 10:55:51.849858839 +0000 UTC m=+162.879860094 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs") pod "network-metrics-daemon-qvt7n" (UID: "cd88c3db-a819-4fb9-a952-30dc1b67c375") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.943347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.943546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.943603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.943641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:47 crc kubenswrapper[4756]: I1203 10:54:47.943670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:47Z","lastTransitionTime":"2025-12-03T10:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.046452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.046519 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.046538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.046557 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.046569 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.149581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.149633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.149644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.149662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.149677 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.252679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.252754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.252772 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.252806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.252826 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.355923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.356022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.356044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.356071 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.356088 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.458919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.459005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.459019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.459035 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.459046 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.561174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.561222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.561231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.561246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.561256 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.665100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.665173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.665185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.665207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.665222 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.768932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.769064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.769090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.769117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.769135 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.874916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.875082 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.875107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.875132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.875148 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.978669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.978746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.978774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.978813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:48 crc kubenswrapper[4756]: I1203 10:54:48.978837 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:48Z","lastTransitionTime":"2025-12-03T10:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.082217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.082295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.082310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.082332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.082356 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.185908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.185998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.186013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.186034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.186052 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.233511 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.233648 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.233733 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.233585 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:49 crc kubenswrapper[4756]: E1203 10:54:49.233840 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:49 crc kubenswrapper[4756]: E1203 10:54:49.234160 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:49 crc kubenswrapper[4756]: E1203 10:54:49.234348 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:49 crc kubenswrapper[4756]: E1203 10:54:49.234787 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.254870 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.274771 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.290145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.290229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.290248 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.290277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.290297 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.300591 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.319861 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.338232 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"na
me\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.360219 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:31Z\\\",\\\"message\\\":\\\"n-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474687 6782 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:54:31.474709 6782 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:54:31.474737 6782 factory.go:656] Stopping watch 
factory\\\\nI1203 10:54:31.474762 6782 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 10:54:31.474772 6782 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 10:54:31.474848 6782 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474922 6782 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475020 6782 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475089 6782 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475302 6782 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475415 6782 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.374505 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.395446 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2e21b3a-5bdf-47a2-9d78-4614ec42ca25\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 10:53:22.746577 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 10:53:22.747795 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1184710852/tls.crt::/tmp/serving-cert-1184710852/tls.key\\\\\\\"\\\\nI1203 10:53:28.206465 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 10:53:28.208779 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 10:53:28.208803 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 10:53:28.208829 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 10:53:28.208835 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 10:53:28.217116 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 10:53:28.217147 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217152 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 10:53:28.217161 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 10:53:28.217164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 10:53:28.217167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 10:53:28.217170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 10:53:28.217182 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 10:53:28.221284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e
6b7b6bcfbf2e218270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.395706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.395758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.395771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.395789 4756 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.395799 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.415065 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.430629 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.449883 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.465327 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bxgrk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85bcc5e9-f7cc-4293-ba77-2013229e14f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16dc61cb1ea3bd864dde2256225b9b06d65d2091be0c8ab36e303b927f42997d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j2hw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bxgrk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.479047 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2qbq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05365c24-b0af-4a09-b576-8245a5ea7512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86d1ec7bd6eb35e26dad9adc2ef368f7a4d3414e4db44baae5292c95d11b12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7ctrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2qbq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.493912 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"690986b4-18a2-46a1-9630-146ab8b3b313\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0ccc45a014b5221a5e22a35b29df5446825770844e7e9b3e8cc01228954ca18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0aa6c1451fb8c8270fcdd28b234cf83cf94b06fe79cd807cb3889b180a40d79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.499088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.499137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.499148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.499167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.499179 4756 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.525596 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65bd3423-2e63-4afa-90a9-4db4c36fa9ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00447ddce8e8cd04c24e8c4c48311968ce8c53aaed2913aa32c5b3673b9dc658\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca64c7bcb4c04c7412d3d79d70ece2b27f16d0343fe0d0ec069771f734a67ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b934076ce1b273fa132a066e5641af99b6e0b62f002bc20f10f0bfa4f2c1f7e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://011be6cdd5ebe67b7b26f6d81922abed2cfedd5c2256a7338662869c1128ca35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88faa812e4c215b5d08c639d100ec61f8c5867bf2e88a2954e4156f53b3257d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a1973df1495d137d0f43b666da34eead1b89
1a539390ffc04d031acee490a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3a1973df1495d137d0f43b666da34eead1b891a539390ffc04d031acee490a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94d9e927be605b83bb7053c54055470bf0c6eabdc296f27263a145b2d0a78e28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd85aa7d94ce212503e1459b05b6681863198dcd6c7440bf16dca1ed3c9d42b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.544572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2778e908-8884-48bf-8c56-ebacf93f4dce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce4fb7fbbf3f4436cce5d68f33c897459650ae5c1afeeca7fdaabbab6eb281bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://555f69d8ff1ed5d128178a8bed7a638a85747d11081d5d5dceae18eaefd880e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2881be88b53e3d833d350c0addf74399879b928301d5a4cb26ac2817dc88f7de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T10:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.565007 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ec72dc3024954ab529bb961916a6465401dbe763ac4c63ad40a850255d82151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.584374 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.602758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.602834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.602875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.602968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.602991 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.603262 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32ebbbed-12e9-4c2f-9c8a-4e2693a7e65c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://187f52cbce70e03a88c6f4a3ab7e0acd0a9ecbaa82110330e7807711030638b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cee7dd58da7f3bdf90f073872b12dac06efc7fecbad98002942bcb104dc6041a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xngpt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:49Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.706133 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.706200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.706220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.706249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.706270 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.808420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.808474 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.808486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.808516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.808528 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.913067 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.913142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.913166 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.913200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:49 crc kubenswrapper[4756]: I1203 10:54:49.913225 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:49Z","lastTransitionTime":"2025-12-03T10:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.017132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.017196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.017216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.017246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.017268 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.121295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.121354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.121374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.121400 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.121458 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.225664 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.225738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.225758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.225785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.225801 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.328626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.328691 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.328708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.328734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.328781 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.433300 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.433427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.433449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.433481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.433500 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.537308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.537389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.537401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.537420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.537431 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.641644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.641701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.641711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.641728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.641739 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.744348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.744414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.744429 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.744456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.744470 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.847743 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.847818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.847847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.847887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.847916 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.951064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.951112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.951122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.951139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:50 crc kubenswrapper[4756]: I1203 10:54:50.951154 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:50Z","lastTransitionTime":"2025-12-03T10:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.054883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.054985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.055005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.055038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.055056 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.158488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.158552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.158571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.158599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.158620 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.234075 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.234168 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:51 crc kubenswrapper[4756]: E1203 10:54:51.234212 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:51 crc kubenswrapper[4756]: E1203 10:54:51.234234 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.234358 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.234456 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:51 crc kubenswrapper[4756]: E1203 10:54:51.234606 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:51 crc kubenswrapper[4756]: E1203 10:54:51.234693 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.260995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.261051 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.261068 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.261091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.261110 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.364167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.364598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.364711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.364801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.364870 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.468764 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.468860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.468887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.468927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.469014 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.572492 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.572559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.572569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.572589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.572609 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.676813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.676898 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.676920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.676988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.677009 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.780824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.780915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.780946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.781020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.781036 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.884708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.884767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.884780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.884801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.884818 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.988280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.988720 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.988978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.989069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:51 crc kubenswrapper[4756]: I1203 10:54:51.989170 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:51Z","lastTransitionTime":"2025-12-03T10:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.093297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.094213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.094360 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.094523 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.094676 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.198234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.199145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.199176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.199206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.199232 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.304687 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.305150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.305275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.305404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.305492 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.409152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.409902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.410274 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.410538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.410690 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.514311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.514374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.514393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.514419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.514437 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.618205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.618279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.618304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.618333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.618349 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.721624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.721674 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.721687 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.721705 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.721725 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.824653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.824749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.824761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.824786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.824798 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.927808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.927858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.927872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.927892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:52 crc kubenswrapper[4756]: I1203 10:54:52.927906 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:52Z","lastTransitionTime":"2025-12-03T10:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.031847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.031916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.031935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.031986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.032005 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.135442 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.135508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.135520 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.135542 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.135553 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.233993 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.234102 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.234170 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:53 crc kubenswrapper[4756]: E1203 10:54:53.234199 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.234300 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:53 crc kubenswrapper[4756]: E1203 10:54:53.234554 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:53 crc kubenswrapper[4756]: E1203 10:54:53.234817 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:53 crc kubenswrapper[4756]: E1203 10:54:53.234931 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.238386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.238429 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.238443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.238466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.238480 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.342885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.343017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.343043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.343082 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.343111 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.446417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.446472 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.446488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.446513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.446530 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.550311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.550379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.550389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.550408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.550420 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.653034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.653107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.653137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.653166 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.653181 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.756595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.756678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.756703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.756739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.756764 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.861195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.861354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.861384 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.861413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.861432 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.965252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.965340 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.965366 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.965396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:53 crc kubenswrapper[4756]: I1203 10:54:53.965415 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:53Z","lastTransitionTime":"2025-12-03T10:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.068186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.068235 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.068247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.068266 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.068278 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.172256 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.172313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.172326 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.172351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.172364 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.185341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.185423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.185449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.185486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.185518 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: E1203 10:54:54.208028 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:54Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.212895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.212982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.213008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.213036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.213057 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: E1203 10:54:54.235707 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:54Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.241419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.241502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.241527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.241560 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.241583 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: E1203 10:54:54.262278 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:54Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.268657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.268711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.268735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.268770 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.268834 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: E1203 10:54:54.288250 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:54Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.294367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.294434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.294458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.294485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.294503 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: E1203 10:54:54.309516 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a916e5b8-6e5c-4097-b971-a8f4ba12cdc7\\\",\\\"systemUUID\\\":\\\"252ddd87-ab9d-46d8-a45d-0324a35cd261\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:54Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:54 crc kubenswrapper[4756]: E1203 10:54:54.309883 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.312069 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.312186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.312210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.312239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.312256 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.415248 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.415284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.415310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.415330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.415343 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.519205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.519273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.519292 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.519320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.519339 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.622659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.622746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.622765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.622815 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.622831 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.725856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.725926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.725974 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.726017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.726039 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.830176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.830228 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.830245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.830271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.830289 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.934347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.934577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.934670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.934709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:54 crc kubenswrapper[4756]: I1203 10:54:54.934733 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:54Z","lastTransitionTime":"2025-12-03T10:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.037760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.037831 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.037852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.037880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.037896 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.141169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.141230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.141250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.141289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.141308 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.233510 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.233548 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:55 crc kubenswrapper[4756]: E1203 10:54:55.233785 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:55 crc kubenswrapper[4756]: E1203 10:54:55.233946 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.234367 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.234503 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:55 crc kubenswrapper[4756]: E1203 10:54:55.234695 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:55 crc kubenswrapper[4756]: E1203 10:54:55.234893 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.243570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.243620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.243638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.243662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.243679 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.347156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.347241 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.347270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.347305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.347329 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.451031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.451105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.451126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.451158 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.451182 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.555043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.555141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.555169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.555201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.555224 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.658584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.658641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.658661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.658694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.658713 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.762154 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.762221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.762235 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.762253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.762281 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.864692 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.864774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.864798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.864839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.864863 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.967100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.967144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.967157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.967178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:55 crc kubenswrapper[4756]: I1203 10:54:55.967192 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:55Z","lastTransitionTime":"2025-12-03T10:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.070054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.070201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.070227 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.070265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.070289 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.174090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.174165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.174184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.174217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.174235 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.277590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.277651 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.277666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.277686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.277704 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.381559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.381597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.381607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.381625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.381637 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.485526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.485613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.485640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.485676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.485696 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.589178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.589223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.589235 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.589254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.589269 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.692776 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.692875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.692907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.692929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.692942 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.796981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.797043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.797057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.797077 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.797089 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.900127 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.900184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.900194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.900217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:56 crc kubenswrapper[4756]: I1203 10:54:56.900228 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:56Z","lastTransitionTime":"2025-12-03T10:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.003421 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.003482 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.003502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.003526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.003541 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.106096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.106157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.106170 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.106190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.106203 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.209724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.209787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.209800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.209825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.209843 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.233200 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.233289 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.233302 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.233373 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:57 crc kubenswrapper[4756]: E1203 10:54:57.233383 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:57 crc kubenswrapper[4756]: E1203 10:54:57.233547 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:57 crc kubenswrapper[4756]: E1203 10:54:57.233621 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:57 crc kubenswrapper[4756]: E1203 10:54:57.234101 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.313075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.313121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.313133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.313152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.313162 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.416832 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.416914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.416935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.416989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.417010 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.519887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.519930 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.519941 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.519975 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.519985 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.623072 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.623130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.623151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.623177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.623197 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.726929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.727042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.727060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.727084 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.727098 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.830268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.830339 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.830358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.830394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.830425 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.934037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.934100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.934111 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.934129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:57 crc kubenswrapper[4756]: I1203 10:54:57.934142 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:57Z","lastTransitionTime":"2025-12-03T10:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.036932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.037045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.037059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.037088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.037102 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.140893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.140989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.141009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.141040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.141061 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.244122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.244243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.244278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.244309 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.244332 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.347567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.347666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.347682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.347703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.347716 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.451185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.451234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.451247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.451271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.451286 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.554294 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.554365 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.554380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.554404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.554420 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.658033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.658136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.658164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.658203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.658228 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.761834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.761899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.761911 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.761933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.761972 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.865444 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.865501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.865516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.865538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.865557 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.968735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.968790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.968803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.968823 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:58 crc kubenswrapper[4756]: I1203 10:54:58.968836 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:58Z","lastTransitionTime":"2025-12-03T10:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.072215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.072275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.072286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.072308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.072320 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.175462 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.175530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.175544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.175560 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.175573 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.233152 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.233209 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.233178 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:54:59 crc kubenswrapper[4756]: E1203 10:54:59.233424 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.233470 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:54:59 crc kubenswrapper[4756]: E1203 10:54:59.233610 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:54:59 crc kubenswrapper[4756]: E1203 10:54:59.233853 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:54:59 crc kubenswrapper[4756]: E1203 10:54:59.233900 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.251151 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f97c171c4e6862e599efcd02cb00e6cc40e63aceb433dbb214c14329461e543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef3b8b98ea9c690c2f6995558e93d1395ca558bcb5c94eb35dcd46080d91be5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.273077 4756 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.278308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.278355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.278366 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.278386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.278398 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.289114 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-27cgj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"088d1c61-980b-42bc-82e6-0215df050158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de32faaa0b97ead1d6c026f078084165133a738638f53708bef9589c3376ab33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9437577c4553de2c3fe931cace0c72fb4bc7209981a58a47b776fed16a21dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d45140856ab8b31e5df80e88b0c39b40eaf5237b6213a00e01220b2788dad479\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://461dcb8c76e330bb4f072e2255112007c496e20493d5068c426603a8b7f07b55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc9d2861790f105f9e85b6b84b3b9ed1e3831c60a6c4ed029b229c167a23bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a38211cbc6a85370f8c40fe55480a037b53f835504a123c51551625184810bf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://558f61139b27e7226dedc162cc8ac6ce3b281ed90b872fa584cc41c9b5db0d78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jvwg\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-27cgj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.305791 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4xwtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0dad5dd-86f8-4a8a-aed6-dd07123c5058\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:16Z\\\",\\\"message\\\":\\\"2025-12-03T10:53:31+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021\\\\n2025-12-03T10:53:31+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4c1e4ab-134e-4e38-a5a1-62f9ab8ec021 to /host/opt/cni/bin/\\\\n2025-12-03T10:53:31Z [verbose] multus-daemon started\\\\n2025-12-03T10:53:31Z [verbose] Readiness Indicator file check\\\\n2025-12-03T10:54:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxd9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4xwtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.327129 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b16dcb4b-a5dd-4081-a569-7f5a024f673b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T10:54:31Z\\\",\\\"message\\\":\\\"n-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474687 6782 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 10:54:31.474709 6782 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 10:54:31.474737 6782 factory.go:656] Stopping watch 
factory\\\\nI1203 10:54:31.474762 6782 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 10:54:31.474772 6782 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 10:54:31.474848 6782 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 10:54:31.474922 6782 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475020 6782 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475089 6782 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475302 6782 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 10:54:31.475415 6782 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T10:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64026774876b8392f7
e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tktq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zqms7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.338776 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd88c3db-a819-4fb9-a952-30dc1b67c375\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k64p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qvt7n\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.351137 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc39f5-d4a1-4174-8d5f-56126872107f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a464f1d8c8a9c4c7b60466203bb3162822d0901049c357c81f6aff4ed55054\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkz9f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pppvw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc 
kubenswrapper[4756]: I1203 10:54:59.363822 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9df49311-9ea2-411a-9627-695fbd0b6248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fc2a0487fcc32cb6cd148976f14df4e7cf8c6e8cb06d7cc8365740484c30b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad98945b1e87deede05a7eda2f9adaddb0b884850dad8a9d6a8d1a5e5df02d12\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c0654e2a6bce50e493d1f0119b8ef84de2222f31a901130d30e2acdf8b6fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://896f874153261479867e3515c9f11726f98556941cfb6342b94b386e95a12ad3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T10:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T10:53:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T10:53:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.375649 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a5451b9cd7eef8f6384ff16c80aea52cdfc3e76e9c74f45f39b0f74a163a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T10:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.380447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.380516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.380533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.380553 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.380568 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.389130 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T10:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T10:54:59Z is after 2025-08-24T17:21:41Z" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.414625 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bxgrk" podStartSLOduration=90.414602382 podStartE2EDuration="1m30.414602382s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:54:59.40973283 +0000 UTC m=+110.439734074" watchObservedRunningTime="2025-12-03 10:54:59.414602382 +0000 UTC m=+110.444603626" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.421383 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2qbq7" podStartSLOduration=90.421356674 podStartE2EDuration="1m30.421356674s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:54:59.421186059 +0000 UTC m=+110.451187303" watchObservedRunningTime="2025-12-03 10:54:59.421356674 +0000 UTC m=+110.451357918" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.442160 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.442137816 podStartE2EDuration="1m31.442137816s" podCreationTimestamp="2025-12-03 10:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:54:59.442002521 +0000 UTC m=+110.472003765" watchObservedRunningTime="2025-12-03 10:54:59.442137816 +0000 UTC m=+110.472139060" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.476805 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=22.476784023 podStartE2EDuration="22.476784023s" podCreationTimestamp="2025-12-03 10:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:54:59.474603975 +0000 UTC m=+110.504605219" watchObservedRunningTime="2025-12-03 10:54:59.476784023 +0000 UTC m=+110.506785267" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.483976 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.484045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.484060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.484078 4756 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.484090 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.505612 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.505584317 podStartE2EDuration="1m31.505584317s" podCreationTimestamp="2025-12-03 10:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:54:59.489832872 +0000 UTC m=+110.519834116" watchObservedRunningTime="2025-12-03 10:54:59.505584317 +0000 UTC m=+110.535585561" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.530094 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xngpt" podStartSLOduration=90.530071535 podStartE2EDuration="1m30.530071535s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:54:59.529773105 +0000 UTC m=+110.559774349" watchObservedRunningTime="2025-12-03 10:54:59.530071535 +0000 UTC m=+110.560072779" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.540637 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.540618005 podStartE2EDuration="37.540618005s" 
podCreationTimestamp="2025-12-03 10:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:54:59.540402969 +0000 UTC m=+110.570404213" watchObservedRunningTime="2025-12-03 10:54:59.540618005 +0000 UTC m=+110.570619249" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.587217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.587258 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.587268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.587282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.587294 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.689449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.689503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.689517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.689541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.689554 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.791897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.791935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.791992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.792009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.792019 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.894272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.894315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.894328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.894343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.894353 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.996779 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.996836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.996848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.996869 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:54:59 crc kubenswrapper[4756]: I1203 10:54:59.996884 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:54:59Z","lastTransitionTime":"2025-12-03T10:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.099509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.099556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.099564 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.099577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.099588 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.202125 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.202188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.202205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.202229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.202247 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.305240 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.305310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.305328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.305354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.305375 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.408231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.408296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.408312 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.408337 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.408354 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.511052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.511091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.511100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.511114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.511122 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.613376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.613438 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.613455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.613477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.613496 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.716418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.716727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.716738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.716755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.716765 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.818615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.818655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.818666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.818682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.818693 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.920898 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.920937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.920966 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.921000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:00 crc kubenswrapper[4756]: I1203 10:55:00.921013 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:00Z","lastTransitionTime":"2025-12-03T10:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.023913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.023962 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.023971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.023986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.023995 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.126631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.126688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.126704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.126723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.126737 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.228925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.229045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.229074 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.229103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.229125 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.235353 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.235357 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.235399 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:01 crc kubenswrapper[4756]: E1203 10:55:01.235557 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.235614 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:01 crc kubenswrapper[4756]: E1203 10:55:01.235763 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:01 crc kubenswrapper[4756]: E1203 10:55:01.235835 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:01 crc kubenswrapper[4756]: E1203 10:55:01.235923 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.236640 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 10:55:01 crc kubenswrapper[4756]: E1203 10:55:01.236872 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zqms7_openshift-ovn-kubernetes(b16dcb4b-a5dd-4081-a569-7f5a024f673b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.331464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.331516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.331528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.331548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.331560 4756 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.434302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.434443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.434467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.434512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.434541 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.537355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.537436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.537455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.537484 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.537506 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.639742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.639792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.639804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.639820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.639831 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.742057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.742097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.742107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.742124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.742133 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.844394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.844429 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.844440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.844454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.844464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.947134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.947174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.947185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.947201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:01 crc kubenswrapper[4756]: I1203 10:55:01.947214 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:01Z","lastTransitionTime":"2025-12-03T10:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.050473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.050542 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.050564 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.050594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.050620 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.153437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.153516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.153538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.153565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.153587 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.256533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.256602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.256621 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.256759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.256789 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.360087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.360159 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.360177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.360207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.360225 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.463937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.464012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.464028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.464050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.464065 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.567553 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.567613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.567631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.567653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.567669 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.671309 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.671367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.671382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.671403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.671416 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.774044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.774095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.774106 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.774123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.774136 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.876998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.877089 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.877116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.877152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.877177 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.981295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.981351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.981364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.981382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:02 crc kubenswrapper[4756]: I1203 10:55:02.981393 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:02Z","lastTransitionTime":"2025-12-03T10:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.084737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.084780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.084790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.084811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.084826 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.188591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.188635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.188643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.188657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.188670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.233279 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.233353 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.233320 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.233320 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:03 crc kubenswrapper[4756]: E1203 10:55:03.233506 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:03 crc kubenswrapper[4756]: E1203 10:55:03.233615 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:03 crc kubenswrapper[4756]: E1203 10:55:03.233862 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:03 crc kubenswrapper[4756]: E1203 10:55:03.234211 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.291799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.291849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.291887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.291908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.291920 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.394765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.394834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.394848 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.394870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.394887 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.497180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.497220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.497229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.497243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.497253 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.599436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.599498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.599509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.599527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.599543 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.702559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.702668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.702694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.702730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.702752 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.806094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.806150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.806161 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.806181 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.806195 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.868992 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/1.log" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.869562 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/0.log" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.869604 4756 generic.go:334] "Generic (PLEG): container finished" podID="d0dad5dd-86f8-4a8a-aed6-dd07123c5058" containerID="315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6" exitCode=1 Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.869654 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerDied","Data":"315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.869779 4756 scope.go:117] "RemoveContainer" containerID="c49fe999504df5cea030ff267c8f4e9aed490e12b5daba5ac65a96d0f425755c" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.870245 4756 scope.go:117] "RemoveContainer" containerID="315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6" Dec 03 10:55:03 crc kubenswrapper[4756]: E1203 10:55:03.870491 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4xwtn_openshift-multus(d0dad5dd-86f8-4a8a-aed6-dd07123c5058)\"" pod="openshift-multus/multus-4xwtn" podUID="d0dad5dd-86f8-4a8a-aed6-dd07123c5058" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.908892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:03 crc 
kubenswrapper[4756]: I1203 10:55:03.908933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.908944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.909003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.909016 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:03Z","lastTransitionTime":"2025-12-03T10:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:55:03 crc kubenswrapper[4756]: I1203 10:55:03.970168 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podStartSLOduration=94.9701484 podStartE2EDuration="1m34.9701484s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:03.969429257 +0000 UTC m=+114.999430521" watchObservedRunningTime="2025-12-03 10:55:03.9701484 +0000 UTC m=+115.000149644" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.005361 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-27cgj" podStartSLOduration=95.005339613 podStartE2EDuration="1m35.005339613s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-03 10:55:03.992348426 +0000 UTC m=+115.022349670" watchObservedRunningTime="2025-12-03 10:55:04.005339613 +0000 UTC m=+115.035340857" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.011490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.011531 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.011548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.011566 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.011580 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.037283 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.037258185 podStartE2EDuration="1m1.037258185s" podCreationTimestamp="2025-12-03 10:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:04.036376447 +0000 UTC m=+115.066377711" watchObservedRunningTime="2025-12-03 10:55:04.037258185 +0000 UTC m=+115.067259429" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.114202 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.114254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.114264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.114278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.114289 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.217596 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.217653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.217663 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.217678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.217688 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.320350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.320396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.320405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.320420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.320433 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.423626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.423662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.423672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.423686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.423697 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.527516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.527585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.527604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.527632 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.527646 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.631593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.631639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.631653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.631677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.631692 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.687073 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.687156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.687182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.687249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.687309 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T10:55:04Z","lastTransitionTime":"2025-12-03T10:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.769255 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl"] Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.771103 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.778680 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.779058 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.782362 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.782439 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.877646 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/1.log" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.882537 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30b047d2-a544-4213-9f95-06e004b64d82-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.882605 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30b047d2-a544-4213-9f95-06e004b64d82-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 
10:55:04.882743 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30b047d2-a544-4213-9f95-06e004b64d82-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.882807 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30b047d2-a544-4213-9f95-06e004b64d82-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.882851 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b047d2-a544-4213-9f95-06e004b64d82-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.983515 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b047d2-a544-4213-9f95-06e004b64d82-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.983645 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30b047d2-a544-4213-9f95-06e004b64d82-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.983701 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30b047d2-a544-4213-9f95-06e004b64d82-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.983779 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30b047d2-a544-4213-9f95-06e004b64d82-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.983868 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30b047d2-a544-4213-9f95-06e004b64d82-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.983985 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/30b047d2-a544-4213-9f95-06e004b64d82-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.984012 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/30b047d2-a544-4213-9f95-06e004b64d82-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.985790 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30b047d2-a544-4213-9f95-06e004b64d82-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:04 crc kubenswrapper[4756]: I1203 10:55:04.997591 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b047d2-a544-4213-9f95-06e004b64d82-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.016270 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30b047d2-a544-4213-9f95-06e004b64d82-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlvjl\" (UID: \"30b047d2-a544-4213-9f95-06e004b64d82\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.099886 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.233412 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.233447 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.233512 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.233615 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:05 crc kubenswrapper[4756]: E1203 10:55:05.233822 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:05 crc kubenswrapper[4756]: E1203 10:55:05.234037 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:05 crc kubenswrapper[4756]: E1203 10:55:05.234195 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:05 crc kubenswrapper[4756]: E1203 10:55:05.234597 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.882371 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" event={"ID":"30b047d2-a544-4213-9f95-06e004b64d82","Type":"ContainerStarted","Data":"cc295c47c60ec748b73808e2b94a234b12f3e9b39d6b1a078fb1f8ac59bfbfd2"} Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.882456 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" event={"ID":"30b047d2-a544-4213-9f95-06e004b64d82","Type":"ContainerStarted","Data":"84f0d2c76afe7e80aa093e3b95d45ab6021ccee5d8cec5947fd6771dd49b6351"} Dec 03 10:55:05 crc kubenswrapper[4756]: I1203 10:55:05.905732 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlvjl" podStartSLOduration=96.905693661 podStartE2EDuration="1m36.905693661s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:05.903614466 +0000 UTC m=+116.933615710" watchObservedRunningTime="2025-12-03 10:55:05.905693661 +0000 UTC m=+116.935694935" Dec 03 10:55:07 crc kubenswrapper[4756]: I1203 10:55:07.233434 4756 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:07 crc kubenswrapper[4756]: I1203 10:55:07.233625 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:07 crc kubenswrapper[4756]: I1203 10:55:07.233920 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:07 crc kubenswrapper[4756]: E1203 10:55:07.233902 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:07 crc kubenswrapper[4756]: I1203 10:55:07.233999 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:07 crc kubenswrapper[4756]: E1203 10:55:07.234138 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:07 crc kubenswrapper[4756]: E1203 10:55:07.234185 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:07 crc kubenswrapper[4756]: E1203 10:55:07.234578 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:09 crc kubenswrapper[4756]: I1203 10:55:09.233256 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:09 crc kubenswrapper[4756]: I1203 10:55:09.233300 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:09 crc kubenswrapper[4756]: I1203 10:55:09.233251 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:09 crc kubenswrapper[4756]: E1203 10:55:09.233980 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:09 crc kubenswrapper[4756]: I1203 10:55:09.234003 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:09 crc kubenswrapper[4756]: E1203 10:55:09.234139 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:09 crc kubenswrapper[4756]: E1203 10:55:09.234223 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:09 crc kubenswrapper[4756]: E1203 10:55:09.234272 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:09 crc kubenswrapper[4756]: E1203 10:55:09.274686 4756 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 03 10:55:09 crc kubenswrapper[4756]: E1203 10:55:09.402888 4756 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 10:55:11 crc kubenswrapper[4756]: I1203 10:55:11.233033 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:11 crc kubenswrapper[4756]: I1203 10:55:11.233181 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:11 crc kubenswrapper[4756]: I1203 10:55:11.233292 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:11 crc kubenswrapper[4756]: I1203 10:55:11.233351 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:11 crc kubenswrapper[4756]: E1203 10:55:11.233686 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:11 crc kubenswrapper[4756]: E1203 10:55:11.234350 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:11 crc kubenswrapper[4756]: E1203 10:55:11.234513 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:11 crc kubenswrapper[4756]: E1203 10:55:11.234114 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:12 crc kubenswrapper[4756]: I1203 10:55:12.234874 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 10:55:12 crc kubenswrapper[4756]: I1203 10:55:12.909233 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/3.log" Dec 03 10:55:12 crc kubenswrapper[4756]: I1203 10:55:12.912249 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerStarted","Data":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} Dec 03 10:55:12 crc kubenswrapper[4756]: I1203 10:55:12.912812 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:55:13 crc kubenswrapper[4756]: I1203 10:55:13.083123 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podStartSLOduration=104.083101114 podStartE2EDuration="1m44.083101114s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:12.941271244 +0000 UTC m=+123.971272518" watchObservedRunningTime="2025-12-03 10:55:13.083101114 +0000 UTC m=+124.113102358" Dec 03 10:55:13 crc kubenswrapper[4756]: I1203 10:55:13.084017 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qvt7n"] Dec 03 10:55:13 crc kubenswrapper[4756]: I1203 10:55:13.084142 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:13 crc kubenswrapper[4756]: E1203 10:55:13.084241 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:13 crc kubenswrapper[4756]: I1203 10:55:13.233701 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:13 crc kubenswrapper[4756]: E1203 10:55:13.233856 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:13 crc kubenswrapper[4756]: I1203 10:55:13.234141 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:13 crc kubenswrapper[4756]: E1203 10:55:13.234199 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:13 crc kubenswrapper[4756]: I1203 10:55:13.234335 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:13 crc kubenswrapper[4756]: E1203 10:55:13.234392 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:14 crc kubenswrapper[4756]: E1203 10:55:14.404872 4756 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.233394 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.233488 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.233525 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.233654 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:15 crc kubenswrapper[4756]: E1203 10:55:15.233653 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:15 crc kubenswrapper[4756]: E1203 10:55:15.233800 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:15 crc kubenswrapper[4756]: E1203 10:55:15.234343 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:15 crc kubenswrapper[4756]: E1203 10:55:15.234430 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.234867 4756 scope.go:117] "RemoveContainer" containerID="315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6" Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.927215 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/1.log" Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.927292 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerStarted","Data":"0560b3efc6adc446014a14846aa8ab9b49f6c721761c97590e339b4729018ca1"} Dec 03 10:55:15 crc kubenswrapper[4756]: I1203 10:55:15.943732 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4xwtn" podStartSLOduration=106.943710118 podStartE2EDuration="1m46.943710118s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:15.942678226 +0000 UTC m=+126.972679500" watchObservedRunningTime="2025-12-03 10:55:15.943710118 +0000 UTC m=+126.973711362" Dec 03 10:55:17 crc kubenswrapper[4756]: I1203 10:55:17.233848 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:17 crc kubenswrapper[4756]: I1203 10:55:17.233994 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:17 crc kubenswrapper[4756]: I1203 10:55:17.234143 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:17 crc kubenswrapper[4756]: E1203 10:55:17.234153 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:17 crc kubenswrapper[4756]: I1203 10:55:17.234206 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:17 crc kubenswrapper[4756]: E1203 10:55:17.234874 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:17 crc kubenswrapper[4756]: E1203 10:55:17.235308 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:17 crc kubenswrapper[4756]: E1203 10:55:17.235567 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:19 crc kubenswrapper[4756]: I1203 10:55:19.233938 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:19 crc kubenswrapper[4756]: I1203 10:55:19.234044 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:19 crc kubenswrapper[4756]: I1203 10:55:19.233938 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:19 crc kubenswrapper[4756]: I1203 10:55:19.234014 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:19 crc kubenswrapper[4756]: E1203 10:55:19.236553 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 10:55:19 crc kubenswrapper[4756]: E1203 10:55:19.236722 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qvt7n" podUID="cd88c3db-a819-4fb9-a952-30dc1b67c375" Dec 03 10:55:19 crc kubenswrapper[4756]: E1203 10:55:19.236801 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 10:55:19 crc kubenswrapper[4756]: E1203 10:55:19.236523 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.233831 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.233997 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.234049 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.234112 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.238761 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.239223 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.239579 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.239728 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.242068 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 10:55:21 crc kubenswrapper[4756]: I1203 10:55:21.242986 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.122653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.190129 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gjjx2"] Dec 03 10:55:25 crc 
kubenswrapper[4756]: I1203 10:55:25.191277 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.195571 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.196171 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.196571 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.196849 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.197349 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.200038 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.203389 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sfq5w"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.204371 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.205069 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xndxw"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.205857 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.206443 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.206521 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.218818 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.226635 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dgdb"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.228587 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.230181 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.235230 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.235357 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.238282 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.238509 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.238833 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.239009 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.239287 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.239479 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.239721 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.240161 4756 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.240194 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.265442 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.266080 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.266374 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.266602 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.266844 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.267172 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.267675 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.268708 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.269086 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.269257 4756 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.279930 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.280335 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.280982 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.281784 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282074 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282163 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282320 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282476 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282600 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282875 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282977 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.283145 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.282525 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nr6nx"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.291179 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.305360 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.305854 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.306235 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.305891 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.306651 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 10:55:25 crc 
kubenswrapper[4756]: I1203 10:55:25.306762 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.307113 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.308319 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.308529 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nr6nx" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.325678 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.326826 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hf4d2"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.327328 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.329176 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5sxx7"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.329767 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.330253 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.330723 4756 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k5dmg"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.331225 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nws44"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.331703 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.331854 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.332030 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.332129 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.332235 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.332307 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.332425 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.332480 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.332796 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.333092 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.333098 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.333148 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.333213 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.336731 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.355632 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.356073 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.356476 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357505 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357583 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbfg\" (UniqueName: \"kubernetes.io/projected/f8e68876-7e42-40f7-acc3-4cb527be5e06-kube-api-access-pwbfg\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-serving-cert\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357666 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fa96d4-7e66-41b2-8073-2fc131612225-config\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357688 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-client-ca\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357712 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357740 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357758 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-etcd-client\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357776 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpv2l\" (UniqueName: \"kubernetes.io/projected/35183c23-2ddd-4984-8ba9-d86765b138ce-kube-api-access-lpv2l\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357799 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357815 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d4fa96d4-7e66-41b2-8073-2fc131612225-machine-approver-tls\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357834 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35183c23-2ddd-4984-8ba9-d86765b138ce-images\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357873 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8k8\" (UniqueName: \"kubernetes.io/projected/d4fa96d4-7e66-41b2-8073-2fc131612225-kube-api-access-sq8k8\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357905 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd1add93-d083-4ee9-b1d7-306db9621f6f-trusted-ca\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357921 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w57b6\" (UniqueName: \"kubernetes.io/projected/1541db70-51f2-4236-854e-6ec0f8fa3010-kube-api-access-w57b6\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357969 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-etcd-serving-ca\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.357988 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-image-import-ca\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358026 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: 
\"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358045 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-audit-dir\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358067 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358085 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358101 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-config\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358122 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1add93-d083-4ee9-b1d7-306db9621f6f-config\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358127 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358140 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1541db70-51f2-4236-854e-6ec0f8fa3010-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358166 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35183c23-2ddd-4984-8ba9-d86765b138ce-config\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358197 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358217 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358237 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-node-pullsecrets\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358253 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-audit\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358276 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358302 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-policies\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358319 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358343 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358366 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhsk\" (UniqueName: \"kubernetes.io/projected/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-kube-api-access-xhhsk\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358389 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd1add93-d083-4ee9-b1d7-306db9621f6f-serving-cert\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358408 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-dir\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358424 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4fa96d4-7e66-41b2-8073-2fc131612225-auth-proxy-config\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358442 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/35183c23-2ddd-4984-8ba9-d86765b138ce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358473 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lks5b\" (UniqueName: \"kubernetes.io/projected/fd1add93-d083-4ee9-b1d7-306db9621f6f-kube-api-access-lks5b\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358505 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: 
\"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358368 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358558 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358599 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-encryption-config\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358661 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-config\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358702 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358737 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358827 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358846 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358978 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.359034 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4rfh"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.358744 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.359414 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.359439 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.366575 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xkdl4"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367285 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367297 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367409 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367577 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 
10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367669 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367725 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367768 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367774 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367846 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367867 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367910 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367965 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.367992 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368040 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368058 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368129 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368196 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368263 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368333 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368135 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368502 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368569 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368600 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368682 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 
03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368704 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368790 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368805 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.368998 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369007 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369048 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369112 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369119 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tqdqx"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369131 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369219 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369309 4756 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369381 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369630 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369646 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369814 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369841 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369901 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.369914 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.370007 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.370070 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.370120 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.370352 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.371548 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mdbh6"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.371870 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.373288 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.373831 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.373883 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.374311 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.375890 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.376268 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.376786 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.377239 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.378310 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.378931 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.379400 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rfwn"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.379719 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.380828 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.381093 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.381198 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.381462 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.381513 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.381666 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-msbqs"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.382398 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.382493 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.383378 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.383905 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.390458 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-827dv"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.394853 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.395150 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.395475 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kn6rz"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.395760 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.396238 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.396243 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.396864 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.399192 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.399998 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.400743 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.402396 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sfq5w"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.404032 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nr6nx"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.405501 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.406519 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hf4d2"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.409201 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xndxw"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.409253 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.410165 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.411740 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns/dns-default-4fmqt"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.412720 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.414603 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dgdb"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.415113 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xkdl4"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.416358 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.418569 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.425529 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.428788 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.428820 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.447046 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.447146 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k5dmg"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 
10:55:25.448445 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.450035 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4rfh"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.453124 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.453436 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.453477 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.455043 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rfwn"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.459308 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.459984 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.460233 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5sxx7"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.469476 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-encryption-config\") pod 
\"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.469946 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-config\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.470186 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbfg\" (UniqueName: \"kubernetes.io/projected/f8e68876-7e42-40f7-acc3-4cb527be5e06-kube-api-access-pwbfg\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.470363 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-serving-cert\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.470506 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fa96d4-7e66-41b2-8073-2fc131612225-config\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.470648 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-client-ca\") pod 
\"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.470791 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.472145 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tqkj\" (UniqueName: \"kubernetes.io/projected/13717635-b3e8-4e34-b622-a46ef9eee317-kube-api-access-5tqkj\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.472321 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13717635-b3e8-4e34-b622-a46ef9eee317-serving-cert\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.472465 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.472605 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-etcd-client\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.472741 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpv2l\" (UniqueName: \"kubernetes.io/projected/35183c23-2ddd-4984-8ba9-d86765b138ce-kube-api-access-lpv2l\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.472898 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474033 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d4fa96d4-7e66-41b2-8073-2fc131612225-machine-approver-tls\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474217 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/35183c23-2ddd-4984-8ba9-d86765b138ce-images\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474347 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-service-ca\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474517 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq8k8\" (UniqueName: \"kubernetes.io/projected/d4fa96d4-7e66-41b2-8073-2fc131612225-kube-api-access-sq8k8\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474647 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-service-ca-bundle\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474802 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-serving-cert\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 
10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474936 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd1add93-d083-4ee9-b1d7-306db9621f6f-trusted-ca\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.475102 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w57b6\" (UniqueName: \"kubernetes.io/projected/1541db70-51f2-4236-854e-6ec0f8fa3010-kube-api-access-w57b6\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.475263 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qhz\" (UniqueName: \"kubernetes.io/projected/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-kube-api-access-f2qhz\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.475419 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-etcd-serving-ca\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.475568 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-image-import-ca\") pod \"apiserver-76f77b778f-xndxw\" 
(UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.475706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-serving-cert\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.475868 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-config\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.476037 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkbt\" (UniqueName: \"kubernetes.io/projected/2a0fc109-2cad-4fb3-a5ed-16c590828ed3-kube-api-access-gjkbt\") pod \"cluster-samples-operator-665b6dd947-wczzj\" (UID: \"2a0fc109-2cad-4fb3-a5ed-16c590828ed3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.479264 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480675 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-audit-dir\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480729 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-ca\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480764 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480791 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480815 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-config\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 
10:55:25.480836 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1add93-d083-4ee9-b1d7-306db9621f6f-config\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480855 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1541db70-51f2-4236-854e-6ec0f8fa3010-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480881 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35183c23-2ddd-4984-8ba9-d86765b138ce-config\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480902 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.480929 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x62b\" (UniqueName: \"kubernetes.io/projected/88056822-ddb3-47aa-b15e-f344471f6b0a-kube-api-access-9x62b\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpxd4\" (UID: 
\"88056822-ddb3-47aa-b15e-f344471f6b0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.485352 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-client-ca\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.486306 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-config\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.479545 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-config\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.486379 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-audit-dir\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.489533 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-serving-cert\") pod \"apiserver-76f77b778f-xndxw\" (UID: 
\"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495036 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5p52\" (UniqueName: \"kubernetes.io/projected/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-kube-api-access-m5p52\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495166 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495198 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495229 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-node-pullsecrets\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495258 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-audit\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495294 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495335 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/88056822-ddb3-47aa-b15e-f344471f6b0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpxd4\" (UID: \"88056822-ddb3-47aa-b15e-f344471f6b0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495365 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495392 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-config\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495442 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-policies\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495463 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495490 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-client\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495518 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495548 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhsk\" (UniqueName: 
\"kubernetes.io/projected/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-kube-api-access-xhhsk\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495575 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd1add93-d083-4ee9-b1d7-306db9621f6f-serving-cert\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495596 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-config\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495618 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-client-ca\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495648 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-dir\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495672 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4fa96d4-7e66-41b2-8073-2fc131612225-auth-proxy-config\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495692 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/35183c23-2ddd-4984-8ba9-d86765b138ce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495714 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lks5b\" (UniqueName: \"kubernetes.io/projected/fd1add93-d083-4ee9-b1d7-306db9621f6f-kube-api-access-lks5b\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495744 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.495769 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a0fc109-2cad-4fb3-a5ed-16c590828ed3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wczzj\" (UID: 
\"2a0fc109-2cad-4fb3-a5ed-16c590828ed3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.496884 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1add93-d083-4ee9-b1d7-306db9621f6f-config\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.498525 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.498677 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd1add93-d083-4ee9-b1d7-306db9621f6f-trusted-ca\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.473479 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-d22vd"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.478302 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/35183c23-2ddd-4984-8ba9-d86765b138ce-images\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 
10:55:25.500338 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-policies\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.500421 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4fa96d4-7e66-41b2-8073-2fc131612225-config\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.500991 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35183c23-2ddd-4984-8ba9-d86765b138ce-config\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.501099 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.474560 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.503833 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nws44"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.503877 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-msbqs"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.503893 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.499288 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-etcd-serving-ca\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.503937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-dir\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.504081 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.504264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-node-pullsecrets\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.504333 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d4fa96d4-7e66-41b2-8073-2fc131612225-auth-proxy-config\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.505236 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.509934 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-image-import-ca\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.510308 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-audit\") pod \"apiserver-76f77b778f-xndxw\" (UID: 
\"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.510361 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.511074 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.511835 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.513464 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mdbh6"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.515469 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tqdqx"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.516336 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.516348 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.516604 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.516749 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/35183c23-2ddd-4984-8ba9-d86765b138ce-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.518891 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d4fa96d4-7e66-41b2-8073-2fc131612225-machine-approver-tls\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.519494 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd1add93-d083-4ee9-b1d7-306db9621f6f-serving-cert\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " 
pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.519596 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.519681 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.519759 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-encryption-config\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.519837 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.519867 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-etcd-client\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.520832 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1541db70-51f2-4236-854e-6ec0f8fa3010-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.524706 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.524782 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kn6rz"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.524838 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.528799 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.530279 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.532144 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.533871 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 
10:55:25.537029 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gjjx2"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.540510 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.541878 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.543598 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4fmqt"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.544628 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.546400 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z9c8s"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.548210 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q9xv9"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.548453 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.548864 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.549053 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q9xv9"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.550339 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z9c8s"] Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.554136 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.574781 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.595341 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596657 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596713 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x62b\" (UniqueName: \"kubernetes.io/projected/88056822-ddb3-47aa-b15e-f344471f6b0a-kube-api-access-9x62b\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpxd4\" (UID: \"88056822-ddb3-47aa-b15e-f344471f6b0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596748 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m5p52\" (UniqueName: \"kubernetes.io/projected/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-kube-api-access-m5p52\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596767 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596790 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/88056822-ddb3-47aa-b15e-f344471f6b0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpxd4\" (UID: \"88056822-ddb3-47aa-b15e-f344471f6b0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596816 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-config\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596872 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-client\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596905 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-config\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596928 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-client-ca\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.596991 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a0fc109-2cad-4fb3-a5ed-16c590828ed3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wczzj\" (UID: \"2a0fc109-2cad-4fb3-a5ed-16c590828ed3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597019 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tqkj\" (UniqueName: \"kubernetes.io/projected/13717635-b3e8-4e34-b622-a46ef9eee317-kube-api-access-5tqkj\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597046 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/13717635-b3e8-4e34-b622-a46ef9eee317-serving-cert\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597093 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-service-ca\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597117 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-service-ca-bundle\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597147 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-serving-cert\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2qhz\" (UniqueName: \"kubernetes.io/projected/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-kube-api-access-f2qhz\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 
10:55:25.597189 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-serving-cert\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597205 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-config\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597222 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjkbt\" (UniqueName: \"kubernetes.io/projected/2a0fc109-2cad-4fb3-a5ed-16c590828ed3-kube-api-access-gjkbt\") pod \"cluster-samples-operator-665b6dd947-wczzj\" (UID: \"2a0fc109-2cad-4fb3-a5ed-16c590828ed3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.597247 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-ca\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.598169 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-ca\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc 
kubenswrapper[4756]: I1203 10:55:25.598243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-config\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.598699 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-service-ca\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.599018 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-config\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.599003 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-client-ca\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.599352 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-service-ca-bundle\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.599582 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-config\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.600445 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a0fc109-2cad-4fb3-a5ed-16c590828ed3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wczzj\" (UID: \"2a0fc109-2cad-4fb3-a5ed-16c590828ed3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.600448 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-etcd-client\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.600609 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.600678 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13717635-b3e8-4e34-b622-a46ef9eee317-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.601613 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-serving-cert\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.601643 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-serving-cert\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.602168 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13717635-b3e8-4e34-b622-a46ef9eee317-serving-cert\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.614353 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.634564 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.654672 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 
10:55:25.682271 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.694493 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.715100 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.735339 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.754553 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.775153 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.794750 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.814632 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.835431 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.854570 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.875099 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 10:55:25 crc 
kubenswrapper[4756]: I1203 10:55:25.895918 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.914506 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.935142 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.953809 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.974077 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 10:55:25 crc kubenswrapper[4756]: I1203 10:55:25.996574 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.014159 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.034558 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.056155 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.062070 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/88056822-ddb3-47aa-b15e-f344471f6b0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpxd4\" (UID: \"88056822-ddb3-47aa-b15e-f344471f6b0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.075015 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.095606 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.114302 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.134594 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.153805 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.174714 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.195546 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.215703 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.236080 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 
10:55:26.256330 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.275077 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.295632 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.314807 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.334634 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.354498 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.374829 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.392304 4756 request.go:700] Waited for 1.012362165s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.396450 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.416094 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.443376 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.455704 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.475826 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.495689 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.514364 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.535081 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.555675 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.574499 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.594686 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.634336 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 
10:55:26.654784 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.674636 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.696441 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.716139 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.735530 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.754787 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.776540 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.795828 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.815089 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.835625 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.855011 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.874229 4756 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.894684 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.915628 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.935294 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.955553 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.975695 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 10:55:26 crc kubenswrapper[4756]: I1203 10:55:26.994250 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.014708 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.034681 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.054317 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.074798 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.137778 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbfg\" (UniqueName: \"kubernetes.io/projected/f8e68876-7e42-40f7-acc3-4cb527be5e06-kube-api-access-pwbfg\") pod \"oauth-openshift-558db77b4-6dgdb\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") " pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.157130 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w57b6\" (UniqueName: \"kubernetes.io/projected/1541db70-51f2-4236-854e-6ec0f8fa3010-kube-api-access-w57b6\") pod \"route-controller-manager-6576b87f9c-zpxxv\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.166436 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.181250 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.184225 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq8k8\" (UniqueName: \"kubernetes.io/projected/d4fa96d4-7e66-41b2-8073-2fc131612225-kube-api-access-sq8k8\") pod \"machine-approver-56656f9798-bvv8r\" (UID: \"d4fa96d4-7e66-41b2-8073-2fc131612225\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.204653 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpv2l\" (UniqueName: \"kubernetes.io/projected/35183c23-2ddd-4984-8ba9-d86765b138ce-kube-api-access-lpv2l\") pod \"machine-api-operator-5694c8668f-gjjx2\" (UID: \"35183c23-2ddd-4984-8ba9-d86765b138ce\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.215759 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.217606 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhsk\" (UniqueName: \"kubernetes.io/projected/5ca956b0-0bf9-4c47-97fb-24e5141cf2bf-kube-api-access-xhhsk\") pod \"apiserver-76f77b778f-xndxw\" (UID: \"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf\") " pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.234716 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.255905 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 10:55:27 crc kubenswrapper[4756]: 
I1203 10:55:27.295822 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.295946 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lks5b\" (UniqueName: \"kubernetes.io/projected/fd1add93-d083-4ee9-b1d7-306db9621f6f-kube-api-access-lks5b\") pod \"console-operator-58897d9998-sfq5w\" (UID: \"fd1add93-d083-4ee9-b1d7-306db9621f6f\") " pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.313788 4756 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.332073 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.334350 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.354604 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.365213 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.374494 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.385198 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv"] Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.392333 4756 request.go:700] Waited for 1.843050304s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 03 10:55:27 crc kubenswrapper[4756]: W1203 10:55:27.393038 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1541db70_51f2_4236_854e_6ec0f8fa3010.slice/crio-224215c7da5b93915bfb2a3adc215443ed141b651341533f77dc68b4e4cc1bfd WatchSource:0}: Error finding container 224215c7da5b93915bfb2a3adc215443ed141b651341533f77dc68b4e4cc1bfd: Status 404 returned error can't find the container with id 224215c7da5b93915bfb2a3adc215443ed141b651341533f77dc68b4e4cc1bfd Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.394311 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.398104 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.417804 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.418773 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dgdb"] Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.435245 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.452110 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x62b\" (UniqueName: \"kubernetes.io/projected/88056822-ddb3-47aa-b15e-f344471f6b0a-kube-api-access-9x62b\") pod \"control-plane-machine-set-operator-78cbb6b69f-kpxd4\" (UID: \"88056822-ddb3-47aa-b15e-f344471f6b0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:27 crc kubenswrapper[4756]: W1203 10:55:27.454180 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8e68876_7e42_40f7_acc3_4cb527be5e06.slice/crio-b5ec3636f362a13ecec9e94e02a7d48a9ae15bac7623bdc8379c1904bff7a5d6 WatchSource:0}: Error finding container b5ec3636f362a13ecec9e94e02a7d48a9ae15bac7623bdc8379c1904bff7a5d6: Status 404 returned error can't find the container with id b5ec3636f362a13ecec9e94e02a7d48a9ae15bac7623bdc8379c1904bff7a5d6 Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.463128 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.473191 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5p52\" (UniqueName: \"kubernetes.io/projected/c24f39cb-b5ef-45f3-99cc-c30786f9c55c-kube-api-access-m5p52\") pod \"etcd-operator-b45778765-k5dmg\" (UID: \"c24f39cb-b5ef-45f3-99cc-c30786f9c55c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.497107 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tqkj\" (UniqueName: \"kubernetes.io/projected/13717635-b3e8-4e34-b622-a46ef9eee317-kube-api-access-5tqkj\") pod \"authentication-operator-69f744f599-5sxx7\" (UID: \"13717635-b3e8-4e34-b622-a46ef9eee317\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.506829 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.510679 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkbt\" (UniqueName: \"kubernetes.io/projected/2a0fc109-2cad-4fb3-a5ed-16c590828ed3-kube-api-access-gjkbt\") pod \"cluster-samples-operator-665b6dd947-wczzj\" (UID: \"2a0fc109-2cad-4fb3-a5ed-16c590828ed3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.533561 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2qhz\" (UniqueName: \"kubernetes.io/projected/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-kube-api-access-f2qhz\") pod \"controller-manager-879f6c89f-t4rfh\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.602438 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gjjx2"] Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627669 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-bound-sa-token\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627708 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-trusted-ca-bundle\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " 
pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627725 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzwr7\" (UniqueName: \"kubernetes.io/projected/8b49366b-8abe-4dde-b64b-fc5d34106174-kube-api-access-nzwr7\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627803 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-stats-auth\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627828 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b49366b-8abe-4dde-b64b-fc5d34106174-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627866 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5908ba24-74aa-4480-a53e-6cb7604d168d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627892 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627915 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96867856-6fdb-4b8a-b19e-54bc30bcc607-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.627938 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdf32ebd-938f-4584-bc73-b6cac4407864-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628195 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7a49672-689d-4256-bba8-cf088f28e689-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628220 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-srv-cert\") pod 
\"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628239 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-config\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628264 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628291 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-oauth-config\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628311 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzj9l\" (UniqueName: \"kubernetes.io/projected/7399790d-a82c-454a-aef1-10bb460bfe73-kube-api-access-kzj9l\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628330 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-g2tjf\" (UniqueName: \"kubernetes.io/projected/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-kube-api-access-g2tjf\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628349 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-service-ca\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628379 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f537a344-95f5-43fc-8ec6-f96cede4f461-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628395 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mgnr\" (UniqueName: \"kubernetes.io/projected/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-kube-api-access-5mgnr\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628412 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cmbl\" (UniqueName: \"kubernetes.io/projected/618da479-bdd6-48c0-8d67-4d08154c9209-kube-api-access-8cmbl\") pod \"dns-operator-744455d44c-xkdl4\" (UID: \"618da479-bdd6-48c0-8d67-4d08154c9209\") 
" pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628452 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6tgw\" (UniqueName: \"kubernetes.io/projected/3ed2177c-9a5f-40c2-a7a5-559f4444548d-kube-api-access-j6tgw\") pod \"downloads-7954f5f757-nr6nx\" (UID: \"3ed2177c-9a5f-40c2-a7a5-559f4444548d\") " pod="openshift-console/downloads-7954f5f757-nr6nx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628470 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqdj\" (UniqueName: \"kubernetes.io/projected/bdf32ebd-938f-4584-bc73-b6cac4407864-kube-api-access-dpqdj\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628486 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-serving-cert\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628505 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-metrics-certs\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628538 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7a49672-689d-4256-bba8-cf088f28e689-config\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628556 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-oauth-serving-cert\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628576 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628593 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a49672-689d-4256-bba8-cf088f28e689-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628609 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: 
\"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628627 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwqkv\" (UniqueName: \"kubernetes.io/projected/d7d4a407-5935-4005-b569-0a2c596a98c9-kube-api-access-wwqkv\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628659 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8b49366b-8abe-4dde-b64b-fc5d34106174-proxy-tls\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628689 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-certificates\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628707 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcd8j\" (UniqueName: \"kubernetes.io/projected/25bd25a7-d2cc-4b0c-969e-2b8cbd95b444-kube-api-access-vcd8j\") pod \"multus-admission-controller-857f4d67dd-tqdqx\" (UID: \"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628722 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628739 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdf32ebd-938f-4584-bc73-b6cac4407864-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628805 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-config\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.628982 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-tls\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.629003 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e4abd5-fe26-4e86-b669-e1089fc6470f-service-ca-bundle\") pod \"router-default-5444994796-827dv\" 
(UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.629046 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9ct\" (UniqueName: \"kubernetes.io/projected/8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba-kube-api-access-9v9ct\") pod \"migrator-59844c95c7-rdwph\" (UID: \"8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.629064 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7d4a407-5935-4005-b569-0a2c596a98c9-tmpfs\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.629079 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skmw6\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-kube-api-access-skmw6\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.629097 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2237b661-da97-455e-b4ec-9b42fbcf6cc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.629128 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-trusted-ca\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.630809 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2g7r\" (UniqueName: \"kubernetes.io/projected/f537a344-95f5-43fc-8ec6-f96cede4f461-kube-api-access-h2g7r\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.630851 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-trusted-ca\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.630877 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-srv-cert\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.630908 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhsql\" (UniqueName: \"kubernetes.io/projected/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-kube-api-access-jhsql\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: 
\"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.630929 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcw8\" (UniqueName: \"kubernetes.io/projected/d6cddd35-757a-487a-afb5-d75d73224aee-kube-api-access-2mcw8\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.630986 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7875340d-00b2-44dc-8117-0aed3f12b94d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.631022 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-encryption-config\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.631079 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7875340d-00b2-44dc-8117-0aed3f12b94d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.631101 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jngt\" (UniqueName: \"kubernetes.io/projected/0e6b1897-6a37-4181-bcb1-876d1205f2ab-kube-api-access-4jngt\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.632067 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdf32ebd-938f-4584-bc73-b6cac4407864-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.632104 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5908ba24-74aa-4480-a53e-6cb7604d168d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.632130 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hnfh\" (UniqueName: \"kubernetes.io/projected/52898585-7090-4ddc-b022-c439b71e241f-kube-api-access-2hnfh\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.632193 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.632220 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-serving-cert\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.632240 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7d4a407-5935-4005-b569-0a2c596a98c9-webhook-cert\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.632697 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52898585-7090-4ddc-b022-c439b71e241f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633100 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2237b661-da97-455e-b4ec-9b42fbcf6cc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633316 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cnl\" (UniqueName: \"kubernetes.io/projected/7875340d-00b2-44dc-8117-0aed3f12b94d-kube-api-access-g9cnl\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633408 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52898585-7090-4ddc-b022-c439b71e241f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633440 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7rdn\" (UniqueName: \"kubernetes.io/projected/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-kube-api-access-m7rdn\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633561 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633602 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-profile-collector-cert\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633677 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633750 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-metrics-tls\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.633983 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-etcd-client\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.634079 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:27 crc kubenswrapper[4756]: E1203 10:55:27.634120 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.134085137 +0000 UTC m=+139.164086381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.634148 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96867856-6fdb-4b8a-b19e-54bc30bcc607-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.634198 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7d4a407-5935-4005-b569-0a2c596a98c9-apiservice-cert\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.634460 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgm68\" (UniqueName: \"kubernetes.io/projected/2237b661-da97-455e-b4ec-9b42fbcf6cc8-kube-api-access-lgm68\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.634732 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f537a344-95f5-43fc-8ec6-f96cede4f461-proxy-tls\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.634820 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-audit-policies\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.634943 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b49366b-8abe-4dde-b64b-fc5d34106174-images\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635102 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/618da479-bdd6-48c0-8d67-4d08154c9209-metrics-tls\") pod \"dns-operator-744455d44c-xkdl4\" (UID: 
\"618da479-bdd6-48c0-8d67-4d08154c9209\") " pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635120 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25bd25a7-d2cc-4b0c-969e-2b8cbd95b444-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tqdqx\" (UID: \"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635267 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5908ba24-74aa-4480-a53e-6cb7604d168d-config\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635398 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-serving-cert\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635731 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635804 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-default-certificate\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635857 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlk9\" (UniqueName: \"kubernetes.io/projected/c0e4abd5-fe26-4e86-b669-e1089fc6470f-kube-api-access-nmlk9\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.635895 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7399790d-a82c-454a-aef1-10bb460bfe73-audit-dir\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.654543 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.671697 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sfq5w"] Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.672003 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.689696 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.737843 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:27 crc kubenswrapper[4756]: E1203 10:55:27.737981 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.237939975 +0000 UTC m=+139.267941219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.738239 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-audit-policies\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.738262 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b49366b-8abe-4dde-b64b-fc5d34106174-images\") pod 
\"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.738896 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b49366b-8abe-4dde-b64b-fc5d34106174-images\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.741856 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/618da479-bdd6-48c0-8d67-4d08154c9209-metrics-tls\") pod \"dns-operator-744455d44c-xkdl4\" (UID: \"618da479-bdd6-48c0-8d67-4d08154c9209\") " pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.741902 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/25bd25a7-d2cc-4b0c-969e-2b8cbd95b444-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tqdqx\" (UID: \"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.741929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5908ba24-74aa-4480-a53e-6cb7604d168d-config\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.741973 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-mountpoint-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.741997 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f9rk\" (UniqueName: \"kubernetes.io/projected/9c3db407-2cd1-4a7b-9b29-26d688823fa0-kube-api-access-8f9rk\") pod \"package-server-manager-789f6589d5-kl5rb\" (UID: \"9c3db407-2cd1-4a7b-9b29-26d688823fa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742024 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-serving-cert\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742044 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742069 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-default-certificate\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: 
I1203 10:55:27.742091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8272209-b1b5-4dac-88d1-ca60b3f50256-config-volume\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742127 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmlk9\" (UniqueName: \"kubernetes.io/projected/c0e4abd5-fe26-4e86-b669-e1089fc6470f-kube-api-access-nmlk9\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742152 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7399790d-a82c-454a-aef1-10bb460bfe73-audit-dir\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742184 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-bound-sa-token\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742202 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-socket-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc 
kubenswrapper[4756]: I1203 10:55:27.742240 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-trusted-ca-bundle\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742263 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwr7\" (UniqueName: \"kubernetes.io/projected/8b49366b-8abe-4dde-b64b-fc5d34106174-kube-api-access-nzwr7\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742273 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5sxx7"] Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742311 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-stats-auth\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742331 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b49366b-8abe-4dde-b64b-fc5d34106174-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742364 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5908ba24-74aa-4480-a53e-6cb7604d168d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742384 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742402 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96867856-6fdb-4b8a-b19e-54bc30bcc607-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742444 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdf32ebd-938f-4584-bc73-b6cac4407864-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7a49672-689d-4256-bba8-cf088f28e689-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742481 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-srv-cert\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742498 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-config\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742514 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742532 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/96b20c46-3167-4522-a0cc-91ee6fc88b79-certs\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742564 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-oauth-config\") pod 
\"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742582 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzj9l\" (UniqueName: \"kubernetes.io/projected/7399790d-a82c-454a-aef1-10bb460bfe73-kube-api-access-kzj9l\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742602 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2tjf\" (UniqueName: \"kubernetes.io/projected/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-kube-api-access-g2tjf\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742640 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-service-ca\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742672 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mgnr\" (UniqueName: \"kubernetes.io/projected/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-kube-api-access-5mgnr\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742714 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f537a344-95f5-43fc-8ec6-f96cede4f461-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742732 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6tgw\" (UniqueName: \"kubernetes.io/projected/3ed2177c-9a5f-40c2-a7a5-559f4444548d-kube-api-access-j6tgw\") pod \"downloads-7954f5f757-nr6nx\" (UID: \"3ed2177c-9a5f-40c2-a7a5-559f4444548d\") " pod="openshift-console/downloads-7954f5f757-nr6nx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742753 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqdj\" (UniqueName: \"kubernetes.io/projected/bdf32ebd-938f-4584-bc73-b6cac4407864-kube-api-access-dpqdj\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742770 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cmbl\" (UniqueName: \"kubernetes.io/projected/618da479-bdd6-48c0-8d67-4d08154c9209-kube-api-access-8cmbl\") pod \"dns-operator-744455d44c-xkdl4\" (UID: \"618da479-bdd6-48c0-8d67-4d08154c9209\") " pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742802 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-serving-cert\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 
10:55:27.742819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-metrics-certs\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742837 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a49672-689d-4256-bba8-cf088f28e689-config\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742838 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5908ba24-74aa-4480-a53e-6cb7604d168d-config\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742856 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-secret-volume\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742875 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742898 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a49672-689d-4256-bba8-cf088f28e689-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742910 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7399790d-a82c-454a-aef1-10bb460bfe73-audit-dir\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742917 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-plugins-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.742990 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-oauth-serving-cert\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743019 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743045 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwqkv\" (UniqueName: \"kubernetes.io/projected/d7d4a407-5935-4005-b569-0a2c596a98c9-kube-api-access-wwqkv\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743072 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a85add7-3606-4006-8d35-e1dc0fb27ab1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8b49366b-8abe-4dde-b64b-fc5d34106174-proxy-tls\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743142 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 
10:55:27.743166 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dc4f\" (UniqueName: \"kubernetes.io/projected/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-kube-api-access-7dc4f\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743191 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg7bv\" (UniqueName: \"kubernetes.io/projected/96b20c46-3167-4522-a0cc-91ee6fc88b79-kube-api-access-cg7bv\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743223 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-certificates\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743247 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcd8j\" (UniqueName: \"kubernetes.io/projected/25bd25a7-d2cc-4b0c-969e-2b8cbd95b444-kube-api-access-vcd8j\") pod \"multus-admission-controller-857f4d67dd-tqdqx\" (UID: \"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743273 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdf32ebd-938f-4584-bc73-b6cac4407864-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743298 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-config\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743325 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a85add7-3606-4006-8d35-e1dc0fb27ab1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743351 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3db407-2cd1-4a7b-9b29-26d688823fa0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kl5rb\" (UID: \"9c3db407-2cd1-4a7b-9b29-26d688823fa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743382 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d1f1dc07-828b-4d25-ada8-f69bee429206-signing-key\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:27 
crc kubenswrapper[4756]: I1203 10:55:27.743406 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-tls\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743428 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e4abd5-fe26-4e86-b669-e1089fc6470f-service-ca-bundle\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743450 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-registration-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743484 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9ct\" (UniqueName: \"kubernetes.io/projected/8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba-kube-api-access-9v9ct\") pod \"migrator-59844c95c7-rdwph\" (UID: \"8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743693 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7d4a407-5935-4005-b569-0a2c596a98c9-tmpfs\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743730 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-trusted-ca\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743855 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skmw6\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-kube-api-access-skmw6\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743883 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2237b661-da97-455e-b4ec-9b42fbcf6cc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743916 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqshn\" (UniqueName: \"kubernetes.io/projected/7a85add7-3606-4006-8d35-e1dc0fb27ab1-kube-api-access-vqshn\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.743983 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h2g7r\" (UniqueName: \"kubernetes.io/projected/f537a344-95f5-43fc-8ec6-f96cede4f461-kube-api-access-h2g7r\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744018 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-trusted-ca\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744042 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/96b20c46-3167-4522-a0cc-91ee6fc88b79-node-bootstrap-token\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744090 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-srv-cert\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744148 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhsql\" (UniqueName: \"kubernetes.io/projected/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-kube-api-access-jhsql\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 
10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744179 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7875340d-00b2-44dc-8117-0aed3f12b94d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744210 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-encryption-config\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744239 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mcw8\" (UniqueName: \"kubernetes.io/projected/d6cddd35-757a-487a-afb5-d75d73224aee-kube-api-access-2mcw8\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744267 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-config-volume\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744297 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-trusted-ca-bundle\") pod \"console-f9d7485db-hf4d2\" (UID: 
\"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744383 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7875340d-00b2-44dc-8117-0aed3f12b94d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744415 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jngt\" (UniqueName: \"kubernetes.io/projected/0e6b1897-6a37-4181-bcb1-876d1205f2ab-kube-api-access-4jngt\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.744453 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5908ba24-74aa-4480-a53e-6cb7604d168d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.746512 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdf32ebd-938f-4584-bc73-b6cac4407864-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.746544 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/25bd25a7-d2cc-4b0c-969e-2b8cbd95b444-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tqdqx\" (UID: \"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.747357 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-config\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748538 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdf32ebd-938f-4584-bc73-b6cac4407864-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748587 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hnfh\" (UniqueName: \"kubernetes.io/projected/52898585-7090-4ddc-b022-c439b71e241f-kube-api-access-2hnfh\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748619 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748656 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhj4v\" (UniqueName: \"kubernetes.io/projected/1fd93ab4-e1c0-456f-8741-b50c983c8a89-kube-api-access-qhj4v\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748709 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-serving-cert\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52898585-7090-4ddc-b022-c439b71e241f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748795 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7d4a407-5935-4005-b569-0a2c596a98c9-webhook-cert\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748834 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7rv5\" (UniqueName: 
\"kubernetes.io/projected/d1f1dc07-828b-4d25-ada8-f69bee429206-kube-api-access-v7rv5\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748868 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8272209-b1b5-4dac-88d1-ca60b3f50256-metrics-tls\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.748900 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2237b661-da97-455e-b4ec-9b42fbcf6cc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749114 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9cnl\" (UniqueName: \"kubernetes.io/projected/7875340d-00b2-44dc-8117-0aed3f12b94d-kube-api-access-g9cnl\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749147 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-csi-data-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749212 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749241 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52898585-7090-4ddc-b022-c439b71e241f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749269 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7rdn\" (UniqueName: \"kubernetes.io/projected/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-kube-api-access-m7rdn\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749304 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-profile-collector-cert\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749334 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-metrics-tls\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: 
\"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749364 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rm5w\" (UniqueName: \"kubernetes.io/projected/d8272209-b1b5-4dac-88d1-ca60b3f50256-kube-api-access-4rm5w\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749392 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gl7n\" (UniqueName: \"kubernetes.io/projected/26439aaa-817a-44d4-93fd-c634fa23617d-kube-api-access-8gl7n\") pod \"ingress-canary-q9xv9\" (UID: \"26439aaa-817a-44d4-93fd-c634fa23617d\") " pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749451 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749485 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-etcd-client\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749511 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/d1f1dc07-828b-4d25-ada8-f69bee429206-signing-cabundle\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749541 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749582 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96867856-6fdb-4b8a-b19e-54bc30bcc607-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749613 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7d4a407-5935-4005-b569-0a2c596a98c9-apiservice-cert\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.749963 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-serving-cert\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.750400 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-serving-cert\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.750832 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/618da479-bdd6-48c0-8d67-4d08154c9209-metrics-tls\") pod \"dns-operator-744455d44c-xkdl4\" (UID: \"618da479-bdd6-48c0-8d67-4d08154c9209\") " pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.751764 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f537a344-95f5-43fc-8ec6-f96cede4f461-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.752771 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-srv-cert\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.753253 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5908ba24-74aa-4480-a53e-6cb7604d168d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 
10:55:27.753681 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2237b661-da97-455e-b4ec-9b42fbcf6cc8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.754516 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-config\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.754694 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-stats-auth\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.754886 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-metrics-certs\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.754895 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-tls\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 
10:55:27.755648 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.755666 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a49672-689d-4256-bba8-cf088f28e689-config\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.755760 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-trusted-ca\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.755876 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0e4abd5-fe26-4e86-b669-e1089fc6470f-service-ca-bundle\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.756251 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.756305 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b49366b-8abe-4dde-b64b-fc5d34106174-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.756396 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7d4a407-5935-4005-b569-0a2c596a98c9-tmpfs\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.756548 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-oauth-serving-cert\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: E1203 10:55:27.756984 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.256946512 +0000 UTC m=+139.286947756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.757749 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-oauth-config\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.758416 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52898585-7090-4ddc-b022-c439b71e241f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.759105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7875340d-00b2-44dc-8117-0aed3f12b94d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.759356 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-trusted-ca\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: 
\"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.759626 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-certificates\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.760046 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7875340d-00b2-44dc-8117-0aed3f12b94d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.760527 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96867856-6fdb-4b8a-b19e-54bc30bcc607-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.760678 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.760798 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/96867856-6fdb-4b8a-b19e-54bc30bcc607-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.760879 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgm68\" (UniqueName: \"kubernetes.io/projected/2237b661-da97-455e-b4ec-9b42fbcf6cc8-kube-api-access-lgm68\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.760929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f537a344-95f5-43fc-8ec6-f96cede4f461-proxy-tls\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.760975 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26439aaa-817a-44d4-93fd-c634fa23617d-cert\") pod \"ingress-canary-q9xv9\" (UID: \"26439aaa-817a-44d4-93fd-c634fa23617d\") " pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.761623 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c0e4abd5-fe26-4e86-b669-e1089fc6470f-default-certificate\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.761677 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdf32ebd-938f-4584-bc73-b6cac4407864-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.765713 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2237b661-da97-455e-b4ec-9b42fbcf6cc8-serving-cert\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.766034 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-etcd-client\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.766165 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7d4a407-5935-4005-b569-0a2c596a98c9-webhook-cert\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.766322 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-serving-cert\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc 
kubenswrapper[4756]: I1203 10:55:27.766324 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-srv-cert\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.766449 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7399790d-a82c-454a-aef1-10bb460bfe73-encryption-config\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.766689 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-metrics-tls\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.766711 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.766968 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7d4a407-5935-4005-b569-0a2c596a98c9-apiservice-cert\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.767529 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f537a344-95f5-43fc-8ec6-f96cede4f461-proxy-tls\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.768017 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7a49672-689d-4256-bba8-cf088f28e689-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.769170 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52898585-7090-4ddc-b022-c439b71e241f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.769611 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-profile-collector-cert\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.778759 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-bound-sa-token\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.785618 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4"] Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.788984 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmlk9\" (UniqueName: \"kubernetes.io/projected/c0e4abd5-fe26-4e86-b669-e1089fc6470f-kube-api-access-nmlk9\") pod \"router-default-5444994796-827dv\" (UID: \"c0e4abd5-fe26-4e86-b669-e1089fc6470f\") " pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.809999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwr7\" (UniqueName: \"kubernetes.io/projected/8b49366b-8abe-4dde-b64b-fc5d34106174-kube-api-access-nzwr7\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.816610 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.830872 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2tjf\" (UniqueName: \"kubernetes.io/projected/37d4397e-71b7-45f4-9d39-1ad59e4ca98d-kube-api-access-g2tjf\") pod \"service-ca-operator-777779d784-msbqs\" (UID: \"37d4397e-71b7-45f4-9d39-1ad59e4ca98d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.834291 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xndxw"] Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.856000 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cmbl\" (UniqueName: \"kubernetes.io/projected/618da479-bdd6-48c0-8d67-4d08154c9209-kube-api-access-8cmbl\") pod \"dns-operator-744455d44c-xkdl4\" (UID: \"618da479-bdd6-48c0-8d67-4d08154c9209\") " pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.862139 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:27 crc kubenswrapper[4756]: E1203 10:55:27.862318 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.362285339 +0000 UTC m=+139.392286583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.862657 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26439aaa-817a-44d4-93fd-c634fa23617d-cert\") pod \"ingress-canary-q9xv9\" (UID: \"26439aaa-817a-44d4-93fd-c634fa23617d\") " pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.862819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-mountpoint-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.862930 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f9rk\" (UniqueName: \"kubernetes.io/projected/9c3db407-2cd1-4a7b-9b29-26d688823fa0-kube-api-access-8f9rk\") pod \"package-server-manager-789f6589d5-kl5rb\" (UID: \"9c3db407-2cd1-4a7b-9b29-26d688823fa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.863074 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8272209-b1b5-4dac-88d1-ca60b3f50256-config-volume\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " 
pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.862939 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-mountpoint-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.863274 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-socket-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.863505 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/96b20c46-3167-4522-a0cc-91ee6fc88b79-certs\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.863675 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-socket-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.863823 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-secret-volume\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 
10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864018 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-plugins-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864157 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a85add7-3606-4006-8d35-e1dc0fb27ab1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dc4f\" (UniqueName: \"kubernetes.io/projected/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-kube-api-access-7dc4f\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864590 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg7bv\" (UniqueName: \"kubernetes.io/projected/96b20c46-3167-4522-a0cc-91ee6fc88b79-kube-api-access-cg7bv\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864746 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a85add7-3606-4006-8d35-e1dc0fb27ab1-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864847 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a85add7-3606-4006-8d35-e1dc0fb27ab1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864091 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-plugins-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.864053 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8272209-b1b5-4dac-88d1-ca60b3f50256-config-volume\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.865139 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3db407-2cd1-4a7b-9b29-26d688823fa0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kl5rb\" (UID: \"9c3db407-2cd1-4a7b-9b29-26d688823fa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.865271 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d1f1dc07-828b-4d25-ada8-f69bee429206-signing-key\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.865602 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-registration-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.865778 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-registration-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.865940 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqshn\" (UniqueName: \"kubernetes.io/projected/7a85add7-3606-4006-8d35-e1dc0fb27ab1-kube-api-access-vqshn\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.866154 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/96b20c46-3167-4522-a0cc-91ee6fc88b79-node-bootstrap-token\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 
10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.866370 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-config-volume\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.866582 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhj4v\" (UniqueName: \"kubernetes.io/projected/1fd93ab4-e1c0-456f-8741-b50c983c8a89-kube-api-access-qhj4v\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.866642 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26439aaa-817a-44d4-93fd-c634fa23617d-cert\") pod \"ingress-canary-q9xv9\" (UID: \"26439aaa-817a-44d4-93fd-c634fa23617d\") " pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.866879 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8272209-b1b5-4dac-88d1-ca60b3f50256-metrics-tls\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.867046 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7rv5\" (UniqueName: \"kubernetes.io/projected/d1f1dc07-828b-4d25-ada8-f69bee429206-kube-api-access-v7rv5\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 
10:55:27.867199 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-csi-data-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.867395 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.867566 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rm5w\" (UniqueName: \"kubernetes.io/projected/d8272209-b1b5-4dac-88d1-ca60b3f50256-kube-api-access-4rm5w\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.867459 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpqdj\" (UniqueName: \"kubernetes.io/projected/bdf32ebd-938f-4584-bc73-b6cac4407864-kube-api-access-dpqdj\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:27 crc kubenswrapper[4756]: E1203 10:55:27.867772 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.367761614 +0000 UTC m=+139.397762858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.867440 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1fd93ab4-e1c0-456f-8741-b50c983c8a89-csi-data-dir\") pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.867769 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gl7n\" (UniqueName: \"kubernetes.io/projected/26439aaa-817a-44d4-93fd-c634fa23617d-kube-api-access-8gl7n\") pod \"ingress-canary-q9xv9\" (UID: \"26439aaa-817a-44d4-93fd-c634fa23617d\") " pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.869044 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d1f1dc07-828b-4d25-ada8-f69bee429206-signing-cabundle\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.869654 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3db407-2cd1-4a7b-9b29-26d688823fa0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kl5rb\" (UID: 
\"9c3db407-2cd1-4a7b-9b29-26d688823fa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.870001 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d1f1dc07-828b-4d25-ada8-f69bee429206-signing-cabundle\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.870793 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-secret-volume\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.871185 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8272209-b1b5-4dac-88d1-ca60b3f50256-metrics-tls\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.939841 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mgnr\" (UniqueName: \"kubernetes.io/projected/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-kube-api-access-5mgnr\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.951244 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9ct\" (UniqueName: \"kubernetes.io/projected/8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba-kube-api-access-9v9ct\") pod 
\"migrator-59844c95c7-rdwph\" (UID: \"8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.963881 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.976110 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:27 crc kubenswrapper[4756]: E1203 10:55:27.976363 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.476312461 +0000 UTC m=+139.506313705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.976491 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:27 crc kubenswrapper[4756]: E1203 10:55:27.977792 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.477772808 +0000 UTC m=+139.507774052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:27 crc kubenswrapper[4756]: I1203 10:55:27.983920 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5908ba24-74aa-4480-a53e-6cb7604d168d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2t5n7\" (UID: \"5908ba24-74aa-4480-a53e-6cb7604d168d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.000477 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.002605 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7a49672-689d-4256-bba8-cf088f28e689-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hzzk5\" (UID: \"a7a49672-689d-4256-bba8-cf088f28e689\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.020069 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.022651 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzj9l\" (UniqueName: \"kubernetes.io/projected/7399790d-a82c-454a-aef1-10bb460bfe73-kube-api-access-kzj9l\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.029132 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhsql\" (UniqueName: \"kubernetes.io/projected/eb97fb78-fa90-43e2-b2d6-866bc7bec0a2-kube-api-access-jhsql\") pod \"olm-operator-6b444d44fb-pz22v\" (UID: \"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.061846 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6tgw\" (UniqueName: \"kubernetes.io/projected/3ed2177c-9a5f-40c2-a7a5-559f4444548d-kube-api-access-j6tgw\") pod \"downloads-7954f5f757-nr6nx\" (UID: \"3ed2177c-9a5f-40c2-a7a5-559f4444548d\") " pod="openshift-console/downloads-7954f5f757-nr6nx" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.077600 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.077855 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:28.577817295 +0000 UTC m=+139.607818579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.078762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.079230 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.579213229 +0000 UTC m=+139.609214473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.088410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2g7r\" (UniqueName: \"kubernetes.io/projected/f537a344-95f5-43fc-8ec6-f96cede4f461-kube-api-access-h2g7r\") pod \"machine-config-controller-84d6567774-nccmk\" (UID: \"f537a344-95f5-43fc-8ec6-f96cede4f461\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.088651 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nr6nx" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.100540 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.102212 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skmw6\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-kube-api-access-skmw6\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.107893 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.123553 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcd8j\" (UniqueName: \"kubernetes.io/projected/25bd25a7-d2cc-4b0c-969e-2b8cbd95b444-kube-api-access-vcd8j\") pod \"multus-admission-controller-857f4d67dd-tqdqx\" (UID: \"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.139878 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdf32ebd-938f-4584-bc73-b6cac4407864-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nwqnq\" (UID: \"bdf32ebd-938f-4584-bc73-b6cac4407864\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.159389 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1bab4a4-a0e1-4445-9323-c3ea6c986f1d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9mtks\" (UID: \"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.161007 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.179907 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.180083 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.68004672 +0000 UTC m=+139.710047974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.180369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.180877 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.680863837 +0000 UTC m=+139.710865101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.181915 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwqkv\" (UniqueName: \"kubernetes.io/projected/d7d4a407-5935-4005-b569-0a2c596a98c9-kube-api-access-wwqkv\") pod \"packageserver-d55dfcdfc-nwj2t\" (UID: \"d7d4a407-5935-4005-b569-0a2c596a98c9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.203767 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7rdn\" (UniqueName: \"kubernetes.io/projected/012e3e5e-713a-46da-b9fe-60c58d0b8bd1-kube-api-access-m7rdn\") pod \"catalog-operator-68c6474976-tjg68\" (UID: \"012e3e5e-713a-46da-b9fe-60c58d0b8bd1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.219882 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mcw8\" (UniqueName: \"kubernetes.io/projected/d6cddd35-757a-487a-afb5-d75d73224aee-kube-api-access-2mcw8\") pod \"marketplace-operator-79b997595-7rfwn\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:28 
crc kubenswrapper[4756]: I1203 10:55:28.242276 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.259871 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hnfh\" (UniqueName: \"kubernetes.io/projected/52898585-7090-4ddc-b022-c439b71e241f-kube-api-access-2hnfh\") pod \"openshift-controller-manager-operator-756b6f6bc6-fxgqg\" (UID: \"52898585-7090-4ddc-b022-c439b71e241f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.280010 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.280125 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9cnl\" (UniqueName: \"kubernetes.io/projected/7875340d-00b2-44dc-8117-0aed3f12b94d-kube-api-access-g9cnl\") pod \"openshift-apiserver-operator-796bbdcf4f-2z4sg\" (UID: \"7875340d-00b2-44dc-8117-0aed3f12b94d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.281791 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.281946 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.781912855 +0000 UTC m=+139.811914099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.282472 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.282825 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.782816624 +0000 UTC m=+139.812817868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.312741 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.321933 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f9rk\" (UniqueName: \"kubernetes.io/projected/9c3db407-2cd1-4a7b-9b29-26d688823fa0-kube-api-access-8f9rk\") pod \"package-server-manager-789f6589d5-kl5rb\" (UID: \"9c3db407-2cd1-4a7b-9b29-26d688823fa0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.326366 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.345570 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dc4f\" (UniqueName: \"kubernetes.io/projected/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-kube-api-access-7dc4f\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.348268 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.364063 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg7bv\" (UniqueName: \"kubernetes.io/projected/96b20c46-3167-4522-a0cc-91ee6fc88b79-kube-api-access-cg7bv\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.377914 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.384077 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.384289 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.884256815 +0000 UTC m=+139.914258089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.384916 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.385490 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.885472084 +0000 UTC m=+139.915473368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.385717 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.392228 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.416261 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7rv5\" (UniqueName: \"kubernetes.io/projected/d1f1dc07-828b-4d25-ada8-f69bee429206-kube-api-access-v7rv5\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.439082 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.469467 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gl7n\" (UniqueName: \"kubernetes.io/projected/26439aaa-817a-44d4-93fd-c634fa23617d-kube-api-access-8gl7n\") pod \"ingress-canary-q9xv9\" (UID: \"26439aaa-817a-44d4-93fd-c634fa23617d\") " pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.479565 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.486042 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.486409 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.986271555 +0000 UTC m=+140.016272849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.486727 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.487243 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:28.987201034 +0000 UTC m=+140.017202478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.495898 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q9xv9" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.544465 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.589551 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.589728 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.089699959 +0000 UTC m=+140.119701223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.589998 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.590426 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.090414932 +0000 UTC m=+140.120416186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.691303 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.691498 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.191458119 +0000 UTC m=+140.221459403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.691685 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.692295 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.192272146 +0000 UTC m=+140.222273530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.793585 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.794070 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.294037417 +0000 UTC m=+140.324038671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.794579 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.795164 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.295140632 +0000 UTC m=+140.325142066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.827946 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-config-volume\") pod \"collect-profiles-29412645-fnqc4\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.828207 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/96b20c46-3167-4522-a0cc-91ee6fc88b79-node-bootstrap-token\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.829108 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.832214 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: 
\"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.832462 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-service-ca\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.832757 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7399790d-a82c-454a-aef1-10bb460bfe73-audit-policies\") pod \"apiserver-7bbb656c7d-88rfz\" (UID: \"7399790d-a82c-454a-aef1-10bb460bfe73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.836896 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/96b20c46-3167-4522-a0cc-91ee6fc88b79-certs\") pod \"machine-config-server-d22vd\" (UID: \"96b20c46-3167-4522-a0cc-91ee6fc88b79\") " pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.837078 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqshn\" (UniqueName: \"kubernetes.io/projected/7a85add7-3606-4006-8d35-e1dc0fb27ab1-kube-api-access-vqshn\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.837793 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhj4v\" (UniqueName: \"kubernetes.io/projected/1fd93ab4-e1c0-456f-8741-b50c983c8a89-kube-api-access-qhj4v\") 
pod \"csi-hostpathplugin-z9c8s\" (UID: \"1fd93ab4-e1c0-456f-8741-b50c983c8a89\") " pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.838752 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5575a3b0-de4c-492e-a3d5-1a95efb8e6ec-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlpqq\" (UID: \"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:28 crc kubenswrapper[4756]: W1203 10:55:28.841253 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13717635_b3e8_4e34_b622_a46ef9eee317.slice/crio-294a52020b14abf291aea34e1fae0057ff2a3289380b5cc433fa89ff6fe575b5 WatchSource:0}: Error finding container 294a52020b14abf291aea34e1fae0057ff2a3289380b5cc433fa89ff6fe575b5: Status 404 returned error can't find the container with id 294a52020b14abf291aea34e1fae0057ff2a3289380b5cc433fa89ff6fe575b5 Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.841536 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rm5w\" (UniqueName: \"kubernetes.io/projected/d8272209-b1b5-4dac-88d1-ca60b3f50256-kube-api-access-4rm5w\") pod \"dns-default-4fmqt\" (UID: \"d8272209-b1b5-4dac-88d1-ca60b3f50256\") " pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.841978 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" event={"ID":"1541db70-51f2-4236-854e-6ec0f8fa3010","Type":"ContainerStarted","Data":"224215c7da5b93915bfb2a3adc215443ed141b651341533f77dc68b4e4cc1bfd"} Dec 03 10:55:28 crc kubenswrapper[4756]: W1203 10:55:28.842995 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd1add93_d083_4ee9_b1d7_306db9621f6f.slice/crio-a702cb66de52b036309a5e4bef1b6fdb7696807609509084f4d9a08568643f29 WatchSource:0}: Error finding container a702cb66de52b036309a5e4bef1b6fdb7696807609509084f4d9a08568643f29: Status 404 returned error can't find the container with id a702cb66de52b036309a5e4bef1b6fdb7696807609509084f4d9a08568643f29 Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.843454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8b49366b-8abe-4dde-b64b-fc5d34106174-proxy-tls\") pod \"machine-config-operator-74547568cd-tzhjl\" (UID: \"8b49366b-8abe-4dde-b64b-fc5d34106174\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.843699 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" event={"ID":"f8e68876-7e42-40f7-acc3-4cb527be5e06","Type":"ContainerStarted","Data":"b5ec3636f362a13ecec9e94e02a7d48a9ae15bac7623bdc8379c1904bff7a5d6"} Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.843721 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a85add7-3606-4006-8d35-e1dc0fb27ab1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bbtcr\" (UID: \"7a85add7-3606-4006-8d35-e1dc0fb27ab1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.844607 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d1f1dc07-828b-4d25-ada8-f69bee429206-signing-key\") pod \"service-ca-9c57cc56f-kn6rz\" (UID: \"d1f1dc07-828b-4d25-ada8-f69bee429206\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.846517 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgm68\" (UniqueName: \"kubernetes.io/projected/2237b661-da97-455e-b4ec-9b42fbcf6cc8-kube-api-access-lgm68\") pod \"openshift-config-operator-7777fb866f-nws44\" (UID: \"2237b661-da97-455e-b4ec-9b42fbcf6cc8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.847155 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" event={"ID":"d4fa96d4-7e66-41b2-8073-2fc131612225","Type":"ContainerStarted","Data":"3d057a787ed2ec8823764db988c350e87b6cee55b7563b00771629d6ffde76b6"} Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.848870 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jngt\" (UniqueName: \"kubernetes.io/projected/0e6b1897-6a37-4181-bcb1-876d1205f2ab-kube-api-access-4jngt\") pod \"console-f9d7485db-hf4d2\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.848945 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" event={"ID":"35183c23-2ddd-4984-8ba9-d86765b138ce","Type":"ContainerStarted","Data":"bd29aae9b14a2345d15be6331c93b9d968ff312b002a3939242f6ac3e5be9a25"} Dec 03 10:55:28 crc kubenswrapper[4756]: W1203 10:55:28.851034 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88056822_ddb3_47aa_b15e_f344471f6b0a.slice/crio-17c65e732122e144696c58217d8fac158c58540dbde8d90414323c50ec72a7b3 WatchSource:0}: Error finding container 17c65e732122e144696c58217d8fac158c58540dbde8d90414323c50ec72a7b3: Status 404 
returned error can't find the container with id 17c65e732122e144696c58217d8fac158c58540dbde8d90414323c50ec72a7b3 Dec 03 10:55:28 crc kubenswrapper[4756]: W1203 10:55:28.851388 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ca956b0_0bf9_4c47_97fb_24e5141cf2bf.slice/crio-9a9f2d4c8a69fff44105ed2802abfcb7388b74f3dfaac17485dd71ac53f1e0e5 WatchSource:0}: Error finding container 9a9f2d4c8a69fff44105ed2802abfcb7388b74f3dfaac17485dd71ac53f1e0e5: Status 404 returned error can't find the container with id 9a9f2d4c8a69fff44105ed2802abfcb7388b74f3dfaac17485dd71ac53f1e0e5 Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.895983 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.896345 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.396327865 +0000 UTC m=+140.426329109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.933975 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.958285 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.970696 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:28 crc kubenswrapper[4756]: I1203 10:55:28.998389 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:28 crc kubenswrapper[4756]: E1203 10:55:28.998919 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.498899843 +0000 UTC m=+140.528901087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.023895 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.032273 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.036102 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.048090 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.048396 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.057233 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.070562 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d22vd" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.089872 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.100473 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.101137 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.601114348 +0000 UTC m=+140.631115592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.110525 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68"] Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.144601 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb"] Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.202451 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.203649 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.703631803 +0000 UTC m=+140.733633047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.305134 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.305639 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.805620212 +0000 UTC m=+140.835621456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.342049 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v"] Dec 03 10:55:29 crc kubenswrapper[4756]: W1203 10:55:29.364830 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3db407_2cd1_4a7b_9b29_26d688823fa0.slice/crio-605020dc6564114a6bb9886604e5d8ba37a932221fc55747d7c2990a08689383 WatchSource:0}: Error finding container 605020dc6564114a6bb9886604e5d8ba37a932221fc55747d7c2990a08689383: Status 404 returned error can't find the container with id 605020dc6564114a6bb9886604e5d8ba37a932221fc55747d7c2990a08689383 Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.369261 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg"] Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.406919 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.407429 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:29.907415755 +0000 UTC m=+140.937416999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: W1203 10:55:29.453645 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb97fb78_fa90_43e2_b2d6_866bc7bec0a2.slice/crio-54c6a5d17437699819c7441d9cfb88355cf47096a575d95d12f7fc32350215ec WatchSource:0}: Error finding container 54c6a5d17437699819c7441d9cfb88355cf47096a575d95d12f7fc32350215ec: Status 404 returned error can't find the container with id 54c6a5d17437699819c7441d9cfb88355cf47096a575d95d12f7fc32350215ec Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.508670 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.509411 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:30.009387182 +0000 UTC m=+141.039388426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.610458 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.610841 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.110823933 +0000 UTC m=+141.140825177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.711839 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.712069 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.212028457 +0000 UTC m=+141.242029701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.712582 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.712992 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.212973467 +0000 UTC m=+141.242974711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.813571 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.814073 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.314054046 +0000 UTC m=+141.344055290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.890478 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" event={"ID":"012e3e5e-713a-46da-b9fe-60c58d0b8bd1","Type":"ContainerStarted","Data":"75c4a8a64b95c23f89c0181346a74de03c9303428d649e497cd6dca0ccf37767"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.896420 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-827dv" event={"ID":"c0e4abd5-fe26-4e86-b669-e1089fc6470f","Type":"ContainerStarted","Data":"97617adb12480bcf4110396bce23ab5181407a888578e3548099a58b9247c7ba"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.904092 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" event={"ID":"13717635-b3e8-4e34-b622-a46ef9eee317","Type":"ContainerStarted","Data":"294a52020b14abf291aea34e1fae0057ff2a3289380b5cc433fa89ff6fe575b5"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.910963 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" event={"ID":"7875340d-00b2-44dc-8117-0aed3f12b94d","Type":"ContainerStarted","Data":"97f478d60d38e7a389b6ad8d67cc9bbb22e7c13e63f6baa9244ecd87afc4c09c"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.911684 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" 
event={"ID":"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf","Type":"ContainerStarted","Data":"9a9f2d4c8a69fff44105ed2802abfcb7388b74f3dfaac17485dd71ac53f1e0e5"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.912834 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" event={"ID":"35183c23-2ddd-4984-8ba9-d86765b138ce","Type":"ContainerStarted","Data":"f9cbd6f34e3da9cf09405043aac4874fd1930388f6b7dda00b0f8ff8d5071255"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.915204 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:29 crc kubenswrapper[4756]: E1203 10:55:29.915606 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.41558546 +0000 UTC m=+141.445586704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.916190 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" event={"ID":"f8e68876-7e42-40f7-acc3-4cb527be5e06","Type":"ContainerStarted","Data":"4d614578e2b88cb32a5acd01898f130cb85015bf20501f82254978e21751d52a"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.917201 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.926531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sfq5w" event={"ID":"fd1add93-d083-4ee9-b1d7-306db9621f6f","Type":"ContainerStarted","Data":"a702cb66de52b036309a5e4bef1b6fdb7696807609509084f4d9a08568643f29"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.928886 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d22vd" event={"ID":"96b20c46-3167-4522-a0cc-91ee6fc88b79","Type":"ContainerStarted","Data":"a0f60cb4e2428923afef1b4edc5752609a6ae680ccc5884b281048951f31ff3a"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.932481 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" 
event={"ID":"88056822-ddb3-47aa-b15e-f344471f6b0a","Type":"ContainerStarted","Data":"17c65e732122e144696c58217d8fac158c58540dbde8d90414323c50ec72a7b3"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.939187 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" event={"ID":"d4fa96d4-7e66-41b2-8073-2fc131612225","Type":"ContainerStarted","Data":"019fde5bb79248a97ea077fd1979e8b3e2d355553cad3e477aa1ea71c63a0980"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.963195 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" event={"ID":"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2","Type":"ContainerStarted","Data":"54c6a5d17437699819c7441d9cfb88355cf47096a575d95d12f7fc32350215ec"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.965809 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.969345 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" event={"ID":"9c3db407-2cd1-4a7b-9b29-26d688823fa0","Type":"ContainerStarted","Data":"605020dc6564114a6bb9886604e5d8ba37a932221fc55747d7c2990a08689383"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.976354 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" event={"ID":"1541db70-51f2-4236-854e-6ec0f8fa3010","Type":"ContainerStarted","Data":"97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed"} Dec 03 10:55:29 crc kubenswrapper[4756]: I1203 10:55:29.977224 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:30 crc 
kubenswrapper[4756]: I1203 10:55:30.020559 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.020842 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.520811853 +0000 UTC m=+141.550813097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.021262 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.022246 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:30.522230137 +0000 UTC m=+141.552231381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.092248 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" podStartSLOduration=121.092216563 podStartE2EDuration="2m1.092216563s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:30.090704626 +0000 UTC m=+141.120705880" watchObservedRunningTime="2025-12-03 10:55:30.092216563 +0000 UTC m=+141.122217807" Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.122657 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.123825 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.623800833 +0000 UTC m=+141.653802087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.138095 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" podStartSLOduration=121.138062709 podStartE2EDuration="2m1.138062709s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:30.13343507 +0000 UTC m=+141.163436334" watchObservedRunningTime="2025-12-03 10:55:30.138062709 +0000 UTC m=+141.168063953" Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.226390 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.226577 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.228110 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:30.728081005 +0000 UTC m=+141.758082249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.333470 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.334013 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.833991568 +0000 UTC m=+141.863992812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.371200 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-msbqs"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.388083 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.394630 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.436458 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.438711 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:30.938680903 +0000 UTC m=+141.968682147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.441801 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4rfh"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.537746 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.537993 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.037926154 +0000 UTC m=+142.067927398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.538042 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.538459 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.038446821 +0000 UTC m=+142.068448065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.638988 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.639230 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.13919522 +0000 UTC m=+142.169196464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.640416 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.640485 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.14047261 +0000 UTC m=+142.170473844 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.674199 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ca956b0_0bf9_4c47_97fb_24e5141cf2bf.slice/crio-805f4cd36bd38c84c8ffa3e9dbe838983f63055a052b511ced5a4e649be58699.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ca956b0_0bf9_4c47_97fb_24e5141cf2bf.slice/crio-conmon-805f4cd36bd38c84c8ffa3e9dbe838983f63055a052b511ced5a4e649be58699.scope\": RecentStats: unable to find data in memory cache]" Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.742729 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.743138 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.24312179 +0000 UTC m=+142.273123034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.846485 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.846980 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.346946846 +0000 UTC m=+142.376948090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.851197 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nr6nx"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.874171 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rfwn"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.887638 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xkdl4"] Dec 03 10:55:30 crc kubenswrapper[4756]: W1203 10:55:30.895365 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ed2177c_9a5f_40c2_a7a5_559f4444548d.slice/crio-e887044708bd12d8734a3fb9cb99fc7ecf7987aa5a44d664004fcec46a10c1cc WatchSource:0}: Error finding container e887044708bd12d8734a3fb9cb99fc7ecf7987aa5a44d664004fcec46a10c1cc: Status 404 returned error can't find the container with id e887044708bd12d8734a3fb9cb99fc7ecf7987aa5a44d664004fcec46a10c1cc Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.912228 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.917400 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tqdqx"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.929496 4756 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.935078 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.936891 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q9xv9"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.939795 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k5dmg"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.942994 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq"] Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.947130 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:30 crc kubenswrapper[4756]: E1203 10:55:30.947595 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.447576012 +0000 UTC m=+142.477577256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:30 crc kubenswrapper[4756]: I1203 10:55:30.991293 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" event={"ID":"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad","Type":"ContainerStarted","Data":"4dcaf59768693bd5908606f47536b2c41d4fa6cd9de27a13514a7720e30aa2b9"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:30.996036 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" event={"ID":"d6cddd35-757a-487a-afb5-d75d73224aee","Type":"ContainerStarted","Data":"4397ec5631f399e2bd421be9641fa5960124d5bc272cd827cb1c630777731080"} Dec 03 10:55:31 crc kubenswrapper[4756]: W1203 10:55:30.997116 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf32ebd_938f_4584_bc73_b6cac4407864.slice/crio-dfaf044f1430023a44a48cca44138a67af9d13ee5b147b7b379c6bd05923833e WatchSource:0}: Error finding container dfaf044f1430023a44a48cca44138a67af9d13ee5b147b7b379c6bd05923833e: Status 404 returned error can't find the container with id dfaf044f1430023a44a48cca44138a67af9d13ee5b147b7b379c6bd05923833e Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:30.998825 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" 
event={"ID":"f537a344-95f5-43fc-8ec6-f96cede4f461","Type":"ContainerStarted","Data":"d3c25a976a24f2920664285fcfdef0ae46a5063ac4e54288e8cb0fdd8a5f9f29"} Dec 03 10:55:31 crc kubenswrapper[4756]: W1203 10:55:31.002484 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7d4a407_5935_4005_b569_0a2c596a98c9.slice/crio-fecfb6d601ac2a869d0265b1cd9cccb0691547f5184bb0eb3b1d5a25229d338a WatchSource:0}: Error finding container fecfb6d601ac2a869d0265b1cd9cccb0691547f5184bb0eb3b1d5a25229d338a: Status 404 returned error can't find the container with id fecfb6d601ac2a869d0265b1cd9cccb0691547f5184bb0eb3b1d5a25229d338a Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.002848 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" event={"ID":"012e3e5e-713a-46da-b9fe-60c58d0b8bd1","Type":"ContainerStarted","Data":"10ac3a92c92085addb0a3c2599bbd6854c8d4b29a9553f0a65cbeb6caeeba01d"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.003369 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.005313 4756 generic.go:334] "Generic (PLEG): container finished" podID="5ca956b0-0bf9-4c47-97fb-24e5141cf2bf" containerID="805f4cd36bd38c84c8ffa3e9dbe838983f63055a052b511ced5a4e649be58699" exitCode=0 Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.005384 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" event={"ID":"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf","Type":"ContainerDied","Data":"805f4cd36bd38c84c8ffa3e9dbe838983f63055a052b511ced5a4e649be58699"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.011909 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" event={"ID":"88056822-ddb3-47aa-b15e-f344471f6b0a","Type":"ContainerStarted","Data":"d0061e5ad295e3f8d290e680a4f87a5021cda23277be91e41f18903d408858a1"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.012050 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.013774 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" event={"ID":"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444","Type":"ContainerStarted","Data":"cd71a4b79d3d884cc7519d41900dd590a431e6e74cb6e9469f007ebae0710d17"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.019704 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" event={"ID":"37d4397e-71b7-45f4-9d39-1ad59e4ca98d","Type":"ContainerStarted","Data":"751dfff73bac5e2582300bbac75a4ffa996a2aa3237fcdfc3e7e3c54fbcecbb7"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.029055 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tjg68" podStartSLOduration=122.029036124 podStartE2EDuration="2m2.029036124s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.020096729 +0000 UTC m=+142.050097973" watchObservedRunningTime="2025-12-03 10:55:31.029036124 +0000 UTC m=+142.059037368" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.042405 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" 
event={"ID":"eb97fb78-fa90-43e2-b2d6-866bc7bec0a2","Type":"ContainerStarted","Data":"fd77425d75552ae9f4bced24d490b4eaaaf40ef857d1a02f3f020dc926cb0148"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.042881 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.051226 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.053753 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" event={"ID":"7a85add7-3606-4006-8d35-e1dc0fb27ab1","Type":"ContainerStarted","Data":"70ae73c97b69d69959b3d5910de769646e44dab6ac626c0c25352f2adbffbb16"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.054157 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.055065 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.555045995 +0000 UTC m=+142.585047239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.057589 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sfq5w" event={"ID":"fd1add93-d083-4ee9-b1d7-306db9621f6f","Type":"ContainerStarted","Data":"3ae11089b0b029bde26dc76b3cb27be45ff3026d357043127045106fb5eed3fa"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.058674 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.061602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" event={"ID":"618da479-bdd6-48c0-8d67-4d08154c9209","Type":"ContainerStarted","Data":"fcbbdb7f7f1e431067b97263607129947d19e36d02aa35514856b2b0dbd92b8d"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.066167 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-kpxd4" podStartSLOduration=122.066129709 podStartE2EDuration="2m2.066129709s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.061386848 +0000 UTC m=+142.091388092" watchObservedRunningTime="2025-12-03 10:55:31.066129709 +0000 UTC m=+142.096130953" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.069058 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" event={"ID":"d4fa96d4-7e66-41b2-8073-2fc131612225","Type":"ContainerStarted","Data":"e8840ea85afd1d59a2c2da11c58703839f408abe3002ebf5694044b7ade95844"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.078160 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q9xv9" event={"ID":"26439aaa-817a-44d4-93fd-c634fa23617d","Type":"ContainerStarted","Data":"6f8b1056c4a3c7d7e1feb23871ebad51f9ed04ff3eb25d5f768f9b06f19edc33"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.085298 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nr6nx" event={"ID":"3ed2177c-9a5f-40c2-a7a5-559f4444548d","Type":"ContainerStarted","Data":"e887044708bd12d8734a3fb9cb99fc7ecf7987aa5a44d664004fcec46a10c1cc"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.091614 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" event={"ID":"c24f39cb-b5ef-45f3-99cc-c30786f9c55c","Type":"ContainerStarted","Data":"b9c2a8e7acb78669565bedd80f0f5210207dfcaa57b48cc5804d787bfb0ec0ee"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.099897 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" event={"ID":"13717635-b3e8-4e34-b622-a46ef9eee317","Type":"ContainerStarted","Data":"d1a7146a0d5d53aca30553a00502cfb26c6ec1288fd1b016bfb1fccc1212a180"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.110942 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" podStartSLOduration=122.11091786 podStartE2EDuration="2m2.11091786s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.108613127 +0000 UTC m=+142.138614371" watchObservedRunningTime="2025-12-03 10:55:31.11091786 +0000 UTC m=+142.140919104" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.122188 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pz22v" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.127375 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" event={"ID":"9c3db407-2cd1-4a7b-9b29-26d688823fa0","Type":"ContainerStarted","Data":"0f0670d0ff1cba828fafcc2ad5e6adc3a696e4d40f10a1715bbb04f116f801c5"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.128988 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sfq5w" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.152440 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.153558 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" event={"ID":"a7a49672-689d-4256-bba8-cf088f28e689","Type":"ContainerStarted","Data":"a98c5f91c0d7865467e2c4223994048252021275502e42f70e81798e6566d52a"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.155085 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 
10:55:31.156022 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.655999861 +0000 UTC m=+142.686001105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.158965 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kn6rz"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.159020 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" event={"ID":"7875340d-00b2-44dc-8117-0aed3f12b94d","Type":"ContainerStarted","Data":"2b8904d77b41a06448264d18dd0d2211684f97c48002632e587a752189cea3f2"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.163772 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.167914 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hf4d2"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.174466 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-z9c8s"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.174456 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console-operator/console-operator-58897d9998-sfq5w" podStartSLOduration=122.17442409 podStartE2EDuration="2m2.17442409s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.169674468 +0000 UTC m=+142.199675712" watchObservedRunningTime="2025-12-03 10:55:31.17442409 +0000 UTC m=+142.204425334" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.181887 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-827dv" event={"ID":"c0e4abd5-fe26-4e86-b669-e1089fc6470f","Type":"ContainerStarted","Data":"3194962b9f332bee8c2858a8b847f55eea5fee11c5d3d5a2a0792073852ea508"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.199743 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bvv8r" podStartSLOduration=122.199719387 podStartE2EDuration="2m2.199719387s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.193577542 +0000 UTC m=+142.223578786" watchObservedRunningTime="2025-12-03 10:55:31.199719387 +0000 UTC m=+142.229720631" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.201083 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d22vd" event={"ID":"96b20c46-3167-4522-a0cc-91ee6fc88b79","Type":"ContainerStarted","Data":"5fd2a2e5633e84d19a439afdbbb96b1d980f1dd0092b33500376ad4d281665bf"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.213797 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" 
event={"ID":"35183c23-2ddd-4984-8ba9-d86765b138ce","Type":"ContainerStarted","Data":"d7731f27bdb8929448324aae7c88da6d7c5b429a3860da9539ef60570b4d6799"} Dec 03 10:55:31 crc kubenswrapper[4756]: W1203 10:55:31.215411 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f1dc07_828b_4d25_ada8_f69bee429206.slice/crio-887163bff84a4e7d5b91a54ce9ef9251f34aff72863047373d6613c07e100961 WatchSource:0}: Error finding container 887163bff84a4e7d5b91a54ce9ef9251f34aff72863047373d6613c07e100961: Status 404 returned error can't find the container with id 887163bff84a4e7d5b91a54ce9ef9251f34aff72863047373d6613c07e100961 Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.219340 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" event={"ID":"52898585-7090-4ddc-b022-c439b71e241f","Type":"ContainerStarted","Data":"4e4c95497bc2ba69ca2cffaed9d29698dde4b2b9365db94de8b1a62fe7c6678c"} Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.257539 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.262298 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.762274396 +0000 UTC m=+142.792275830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.284272 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4fmqt"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.284670 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5sxx7" podStartSLOduration=122.284657111 podStartE2EDuration="2m2.284657111s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.274208248 +0000 UTC m=+142.304209492" watchObservedRunningTime="2025-12-03 10:55:31.284657111 +0000 UTC m=+142.314658355" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.313045 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2z4sg" podStartSLOduration=122.313022588 podStartE2EDuration="2m2.313022588s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.311452248 +0000 UTC m=+142.341453492" watchObservedRunningTime="2025-12-03 10:55:31.313022588 +0000 UTC m=+142.343023822" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.321573 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.333143 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.339758 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nws44"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.341258 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.344056 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.349335 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.361026 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.361462 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.861444035 +0000 UTC m=+142.891445269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.444302 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-827dv" podStartSLOduration=122.444275331 podStartE2EDuration="2m2.444275331s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.36065011 +0000 UTC m=+142.390651354" watchObservedRunningTime="2025-12-03 10:55:31.444275331 +0000 UTC m=+142.474276575" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.465397 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.466356 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:31.966335806 +0000 UTC m=+142.996337050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.553256 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-d22vd" podStartSLOduration=6.553234722 podStartE2EDuration="6.553234722s" podCreationTimestamp="2025-12-03 10:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.504764384 +0000 UTC m=+142.534765638" watchObservedRunningTime="2025-12-03 10:55:31.553234722 +0000 UTC m=+142.583235956" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.567096 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.567713 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.067691004 +0000 UTC m=+143.097692248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.668824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.669306 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.16928119 +0000 UTC m=+143.199282434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.776337 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.776534 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.276501585 +0000 UTC m=+143.306502829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.776675 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.777181 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.277163337 +0000 UTC m=+143.307164581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.817753 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.853070 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjjx2" podStartSLOduration=122.853046311 podStartE2EDuration="2m2.853046311s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:31.554326618 +0000 UTC m=+142.584327862" watchObservedRunningTime="2025-12-03 10:55:31.853046311 +0000 UTC m=+142.883047545" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.853405 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2cjlg"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.860369 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.870439 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2cjlg"] Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.878161 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.878612 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.378592878 +0000 UTC m=+143.408594122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.878936 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.917019 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:31 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:31 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:31 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.917106 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.980229 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-utilities\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.980276 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdh86\" (UniqueName: \"kubernetes.io/projected/c971a632-2e34-4783-bb0a-9e516fb8bdbd-kube-api-access-hdh86\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.980354 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:31 crc kubenswrapper[4756]: I1203 10:55:31.980452 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-catalog-content\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:31 crc kubenswrapper[4756]: E1203 10:55:31.980865 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.480850264 +0000 UTC m=+143.510851508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.056912 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-972dg"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.058280 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.061109 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.075998 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-972dg"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.081002 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.081262 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.581231372 +0000 UTC m=+143.611232616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.081459 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-catalog-content\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.081607 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-utilities\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.081648 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdh86\" (UniqueName: \"kubernetes.io/projected/c971a632-2e34-4783-bb0a-9e516fb8bdbd-kube-api-access-hdh86\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.081775 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: 
\"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.082664 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-utilities\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.082857 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.582847433 +0000 UTC m=+143.612848757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.083055 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-catalog-content\") pod \"certified-operators-2cjlg\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.116763 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdh86\" (UniqueName: \"kubernetes.io/projected/c971a632-2e34-4783-bb0a-9e516fb8bdbd-kube-api-access-hdh86\") pod \"certified-operators-2cjlg\" (UID: 
\"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.183174 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.183469 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-catalog-content\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.183564 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.683501349 +0000 UTC m=+143.713502593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.183700 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-utilities\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.183942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpw2p\" (UniqueName: \"kubernetes.io/projected/a4514e32-bd45-46cf-b849-42c2d941f777-kube-api-access-mpw2p\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.184165 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.184721 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:32.684709408 +0000 UTC m=+143.714710652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.227106 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4fmqt" event={"ID":"d8272209-b1b5-4dac-88d1-ca60b3f50256","Type":"ContainerStarted","Data":"4e8ae8cc140e194810d1933e34469cc6c02958bbe5ef19e460bff859e3f051c7"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.228071 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" event={"ID":"8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba","Type":"ContainerStarted","Data":"edd9bc7872a258710a36f015556e659097fb113a691d6b1b7b7e2dacdb461a23"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.229567 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" event={"ID":"7399790d-a82c-454a-aef1-10bb460bfe73","Type":"ContainerStarted","Data":"9b1293aa4e3201a00bcee389db650290dfca38046175550fab608e07f0b63050"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.230977 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hf4d2" event={"ID":"0e6b1897-6a37-4181-bcb1-876d1205f2ab","Type":"ContainerStarted","Data":"dddbe53dfdcb1427b08dd5f467a9b73082d0f9cc7df722db13d23bfe7054e845"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.232221 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" event={"ID":"d1f1dc07-828b-4d25-ada8-f69bee429206","Type":"ContainerStarted","Data":"887163bff84a4e7d5b91a54ce9ef9251f34aff72863047373d6613c07e100961"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.233525 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" event={"ID":"2237b661-da97-455e-b4ec-9b42fbcf6cc8","Type":"ContainerStarted","Data":"1bdf3852201654f6855d79aed618c771a6610a9bc410136d1b5fd4121bca6e73"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.236112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" event={"ID":"f537a344-95f5-43fc-8ec6-f96cede4f461","Type":"ContainerStarted","Data":"74bc089824a95f374bebaafc4ff0ab6b2da3e75f8837dd94b9d8a054b56cf2f7"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.238070 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" event={"ID":"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec","Type":"ContainerStarted","Data":"7a0f081510991995d4bce20a01282a8ac05b9bb345b2536d51a2d31f9a27d841"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.239287 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" event={"ID":"5908ba24-74aa-4480-a53e-6cb7604d168d","Type":"ContainerStarted","Data":"cd177775849a5a7e1071d4bcd95831f79a33e54f5c47d26a5051e33c82dbcb99"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.241082 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" event={"ID":"8b49366b-8abe-4dde-b64b-fc5d34106174","Type":"ContainerStarted","Data":"c0170c6356743257fe9211cb24ca97afe4aca61618b4c96cb3b75554a39796b0"} Dec 03 10:55:32 crc 
kubenswrapper[4756]: I1203 10:55:32.253195 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" event={"ID":"1fd93ab4-e1c0-456f-8741-b50c983c8a89","Type":"ContainerStarted","Data":"108a4fc390731f0aaded6211f59b85222cc01f25df8e26f87c34cbc350af8e7f"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.259723 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-82dpl"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.264552 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.264666 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" event={"ID":"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d","Type":"ContainerStarted","Data":"e7590e079a75462baa089214ed70cc660ad37b1e2314ed89c5afe457750cc1dc"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.271864 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nr6nx" event={"ID":"3ed2177c-9a5f-40c2-a7a5-559f4444548d","Type":"ContainerStarted","Data":"25f64cadffee052957a0eba334fc018e89ff4c293257dd057a9bbf48f1e54677"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.273898 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-82dpl"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.290527 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" event={"ID":"bdf32ebd-938f-4584-bc73-b6cac4407864","Type":"ContainerStarted","Data":"dfaf044f1430023a44a48cca44138a67af9d13ee5b147b7b379c6bd05923833e"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.290970 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.291321 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.291650 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.791607553 +0000 UTC m=+143.821608797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.291822 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-catalog-content\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.291880 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-utilities\") pod \"community-operators-972dg\" 
(UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.291929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpw2p\" (UniqueName: \"kubernetes.io/projected/a4514e32-bd45-46cf-b849-42c2d941f777-kube-api-access-mpw2p\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.292010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.292433 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.792414639 +0000 UTC m=+143.822415883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.292642 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-utilities\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.292666 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-catalog-content\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.294977 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" event={"ID":"6ce33fc3-643b-4347-aa7f-31fcfe461b1d","Type":"ContainerStarted","Data":"80323519e4d8c941aac41daae54586d01de0aa3911d5519c9067e7b1c43ca5e6"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.304023 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" event={"ID":"37d4397e-71b7-45f4-9d39-1ad59e4ca98d","Type":"ContainerStarted","Data":"3e3a9a85d8b224b5c369642a3d9b34d807c45d706e6242a918858f9f92571657"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.316102 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpw2p\" (UniqueName: \"kubernetes.io/projected/a4514e32-bd45-46cf-b849-42c2d941f777-kube-api-access-mpw2p\") pod \"community-operators-972dg\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.325530 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" event={"ID":"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad","Type":"ContainerStarted","Data":"ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.329817 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" event={"ID":"9c3db407-2cd1-4a7b-9b29-26d688823fa0","Type":"ContainerStarted","Data":"daf428c3865e8cb52cebce55553ab2aea8b30fd6ecd1e91ee9c393274c0ebfea"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.332682 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" event={"ID":"d7d4a407-5935-4005-b569-0a2c596a98c9","Type":"ContainerStarted","Data":"fecfb6d601ac2a869d0265b1cd9cccb0691547f5184bb0eb3b1d5a25229d338a"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.335867 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" event={"ID":"a7a49672-689d-4256-bba8-cf088f28e689","Type":"ContainerStarted","Data":"c2e9c2539b3a2b4bddfde37f8fdd8ed7aa6cf2644a0b1886e1fe317309138825"} Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.392577 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.393020 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.393580 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-catalog-content\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.393714 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sschx\" (UniqueName: \"kubernetes.io/projected/35b0d395-c4ca-4956-9f3e-134814838598-kube-api-access-sschx\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.393755 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-utilities\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.394880 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-03 10:55:32.894823021 +0000 UTC m=+143.924824425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.454365 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l688d"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.456257 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.467781 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l688d"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.495344 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-catalog-content\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.495399 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:32 crc kubenswrapper[4756]: 
I1203 10:55:32.495435 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sschx\" (UniqueName: \"kubernetes.io/projected/35b0d395-c4ca-4956-9f3e-134814838598-kube-api-access-sschx\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.495466 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-utilities\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.498052 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-catalog-content\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.498435 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:32.998416751 +0000 UTC m=+144.028418005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.499650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-utilities\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.524063 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sschx\" (UniqueName: \"kubernetes.io/projected/35b0d395-c4ca-4956-9f3e-134814838598-kube-api-access-sschx\") pod \"certified-operators-82dpl\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.596773 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.596977 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:33.096933218 +0000 UTC m=+144.126934462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.597212 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-utilities\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.597323 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzw5q\" (UniqueName: \"kubernetes.io/projected/89c85576-c4b8-4c93-8ce0-de9654286cff-kube-api-access-vzw5q\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.597384 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-catalog-content\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.597589 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.599003 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.098988214 +0000 UTC m=+144.128989458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.600837 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.649741 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2cjlg"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.700751 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.701150 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-utilities\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.701222 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzw5q\" (UniqueName: \"kubernetes.io/projected/89c85576-c4b8-4c93-8ce0-de9654286cff-kube-api-access-vzw5q\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.701246 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-catalog-content\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.701792 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-catalog-content\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.701890 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.201868111 +0000 UTC m=+144.231869355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.702142 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-utilities\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.742101 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzw5q\" (UniqueName: \"kubernetes.io/projected/89c85576-c4b8-4c93-8ce0-de9654286cff-kube-api-access-vzw5q\") pod \"community-operators-l688d\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.743728 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-972dg"] Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.777392 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l688d" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.813286 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.813744 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.313723695 +0000 UTC m=+144.343724939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.842221 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:32 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:32 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:32 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.842303 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.917046 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.917532 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:33.41749555 +0000 UTC m=+144.447496794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:32 crc kubenswrapper[4756]: I1203 10:55:32.917776 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:32 crc kubenswrapper[4756]: E1203 10:55:32.918266 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.418248994 +0000 UTC m=+144.448250238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.020386 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.021545 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.521518024 +0000 UTC m=+144.551519258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.077349 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-82dpl"] Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.136788 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.137702 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.637684525 +0000 UTC m=+144.667685769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.245408 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.245648 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.745604103 +0000 UTC m=+144.775605357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.245824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.246232 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.746221112 +0000 UTC m=+144.776222356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.347842 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.348374 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.848350666 +0000 UTC m=+144.878351910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.424796 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" event={"ID":"7a85add7-3606-4006-8d35-e1dc0fb27ab1","Type":"ContainerStarted","Data":"a459693eb7d375507b334a1b11f7ff5516ccd3a0538a6207fa1cf9bfd5c571a1"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.426343 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l688d"] Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.455524 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.455635 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" event={"ID":"618da479-bdd6-48c0-8d67-4d08154c9209","Type":"ContainerStarted","Data":"b940e9f2de9f51d1e4cc99b807052375c68a03378d57e3e8e3de111d863d5f22"} Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.456211 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:33.956195911 +0000 UTC m=+144.986197155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.462924 4756 generic.go:334] "Generic (PLEG): container finished" podID="7399790d-a82c-454a-aef1-10bb460bfe73" containerID="3c222226671dc50eefe3cf579aa939d46cb49eca8699d7ee6d24a40b8f263c44" exitCode=0 Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.463173 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" event={"ID":"7399790d-a82c-454a-aef1-10bb460bfe73","Type":"ContainerDied","Data":"3c222226671dc50eefe3cf579aa939d46cb49eca8699d7ee6d24a40b8f263c44"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.466852 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" event={"ID":"8b49366b-8abe-4dde-b64b-fc5d34106174","Type":"ContainerStarted","Data":"c58cde0e4755f5253fc835f7353e1a7b7dbc4a66f243280d9445c690ed02a2fe"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.535518 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q9xv9" event={"ID":"26439aaa-817a-44d4-93fd-c634fa23617d","Type":"ContainerStarted","Data":"58693b7cb8432265baa3bc696e80ceb342fd13fb0a0f15f3eb75ab899174fd60"} Dec 03 10:55:33 crc kubenswrapper[4756]: 
I1203 10:55:33.539704 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bbtcr" podStartSLOduration=124.539665178 podStartE2EDuration="2m4.539665178s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:33.473877886 +0000 UTC m=+144.503879130" watchObservedRunningTime="2025-12-03 10:55:33.539665178 +0000 UTC m=+144.569666422" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.561916 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" event={"ID":"c24f39cb-b5ef-45f3-99cc-c30786f9c55c","Type":"ContainerStarted","Data":"d927922125bb560cf94069a86b6b265dcd453e51bb5b5863af5dc9da2de57776"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.583049 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.599898 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.099872402 +0000 UTC m=+145.129873646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.618818 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q9xv9" podStartSLOduration=8.618790686 podStartE2EDuration="8.618790686s" podCreationTimestamp="2025-12-03 10:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:33.590239324 +0000 UTC m=+144.620240568" watchObservedRunningTime="2025-12-03 10:55:33.618790686 +0000 UTC m=+144.648791930" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.668711 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.669206 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" event={"ID":"d6cddd35-757a-487a-afb5-d75d73224aee","Type":"ContainerStarted","Data":"2e4a02a3ec77872668d7c1ba4c8fd250926b509a8cf30093f1510434e7cb8534"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.669235 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" event={"ID":"1fd93ab4-e1c0-456f-8741-b50c983c8a89","Type":"ContainerStarted","Data":"4f6c9c9b356323a4b2eee15436c5d417572bfc28901d6718a9278786b732ce61"} Dec 03 10:55:33 crc kubenswrapper[4756]: W1203 10:55:33.677357 4756 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c85576_c4b8_4c93_8ce0_de9654286cff.slice/crio-09ee05cc79a263d7b91339430052c6c9a00be12e008ba4ad2a5e1237e1daeb78 WatchSource:0}: Error finding container 09ee05cc79a263d7b91339430052c6c9a00be12e008ba4ad2a5e1237e1daeb78: Status 404 returned error can't find the container with id 09ee05cc79a263d7b91339430052c6c9a00be12e008ba4ad2a5e1237e1daeb78 Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.692968 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.712477 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" event={"ID":"d7d4a407-5935-4005-b569-0a2c596a98c9","Type":"ContainerStarted","Data":"fe7abba3e0e42b2334e45f022d4341775c8d50f86029815e0bebd8dcd55591b1"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.712642 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" podStartSLOduration=124.712622754 podStartE2EDuration="2m4.712622754s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:33.711721125 +0000 UTC m=+144.741722369" watchObservedRunningTime="2025-12-03 10:55:33.712622754 +0000 UTC m=+144.742623998" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.712905 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd-operator/etcd-operator-b45778765-k5dmg" podStartSLOduration=124.712901103 podStartE2EDuration="2m4.712901103s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:33.667165902 +0000 UTC m=+144.697167146" watchObservedRunningTime="2025-12-03 10:55:33.712901103 +0000 UTC m=+144.742902347" Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.713936 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.213918625 +0000 UTC m=+145.243919869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.714113 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.714690 4756 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7rfwn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.714717 4756 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.726706 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" event={"ID":"d1f1dc07-828b-4d25-ada8-f69bee429206","Type":"ContainerStarted","Data":"cd9f932bcb768f6fa7d815818c94cb593eddb34905ef4641b1e04f887ed13df1"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.767782 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" podStartSLOduration=124.767739325 podStartE2EDuration="2m4.767739325s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:33.759222613 +0000 UTC m=+144.789223857" watchObservedRunningTime="2025-12-03 10:55:33.767739325 +0000 UTC m=+144.797740569" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.770093 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" event={"ID":"5575a3b0-de4c-492e-a3d5-1a95efb8e6ec","Type":"ContainerStarted","Data":"e1f5a14124f8c243d5b5f198376bc5bb1fee50edad8050f6a52fb6ff1564bb20"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.793867 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kn6rz" podStartSLOduration=124.793848359 podStartE2EDuration="2m4.793848359s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 10:55:33.792501756 +0000 UTC m=+144.822503000" watchObservedRunningTime="2025-12-03 10:55:33.793848359 +0000 UTC m=+144.823849603" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.796019 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.797751 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.297722323 +0000 UTC m=+145.327723567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.826407 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlpqq" podStartSLOduration=124.826361438 podStartE2EDuration="2m4.826361438s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:33.814996705 +0000 UTC m=+144.844997949" watchObservedRunningTime="2025-12-03 10:55:33.826361438 
+0000 UTC m=+144.856362682" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.838476 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:33 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:33 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:33 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.838552 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.845910 4756 generic.go:334] "Generic (PLEG): container finished" podID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerID="c09ea5a36d2260eb3394c34a67cb23cddf31963f1ce2926ef2f2428ec27b8ccd" exitCode=0 Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.845990 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cjlg" event={"ID":"c971a632-2e34-4783-bb0a-9e516fb8bdbd","Type":"ContainerDied","Data":"c09ea5a36d2260eb3394c34a67cb23cddf31963f1ce2926ef2f2428ec27b8ccd"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.846019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cjlg" event={"ID":"c971a632-2e34-4783-bb0a-9e516fb8bdbd","Type":"ContainerStarted","Data":"804ef21924ce6c9a190776d3102aebb2eecd3a13035b352369d8f4a760763e7a"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.894945 4756 generic.go:334] "Generic (PLEG): container finished" podID="2237b661-da97-455e-b4ec-9b42fbcf6cc8" 
containerID="516a5f5e2f19fbfff888e57ffd5fa383b26effd9bf173c4f9792b031f779e106" exitCode=0 Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.895095 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" event={"ID":"2237b661-da97-455e-b4ec-9b42fbcf6cc8","Type":"ContainerDied","Data":"516a5f5e2f19fbfff888e57ffd5fa383b26effd9bf173c4f9792b031f779e106"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.897380 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:33 crc kubenswrapper[4756]: E1203 10:55:33.899644 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.399629039 +0000 UTC m=+145.429630283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.921039 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4fmqt" event={"ID":"d8272209-b1b5-4dac-88d1-ca60b3f50256","Type":"ContainerStarted","Data":"3709e530e70f37aa9999f0c80c850778fb414b08140272f983d0c33d9bf78247"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.951559 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" event={"ID":"8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba","Type":"ContainerStarted","Data":"01e27aee4ac802dacf3ddd960506f17686fa4d546bfe2d7ff674e9951cda37fb"} Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.965376 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 10:55:33 crc kubenswrapper[4756]: I1203 10:55:33.987753 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" event={"ID":"6ce33fc3-643b-4347-aa7f-31fcfe461b1d","Type":"ContainerStarted","Data":"ef9c857ca5a9d3dd2f7c6ca6fa9cd5d612d9bfd31a00a6e81842ba22662ebe8e"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.000841 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.003040 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.503019482 +0000 UTC m=+145.533020726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.023198 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" event={"ID":"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444","Type":"ContainerStarted","Data":"9bf6139ec107aea0a0d7f56013a5fb226d7e3434fd649f81617d2262cc2851f0"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.049334 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hf4d2" event={"ID":"0e6b1897-6a37-4181-bcb1-876d1205f2ab","Type":"ContainerStarted","Data":"bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.078322 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" event={"ID":"f537a344-95f5-43fc-8ec6-f96cede4f461","Type":"ContainerStarted","Data":"0cb253c4b40437b455d3e5bc21cc326829c5a309bbef478ea969224bdb297e73"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.105773 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.106113 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.606098555 +0000 UTC m=+145.636099799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.153607 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" podStartSLOduration=125.153578872 podStartE2EDuration="2m5.153578872s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.102186951 +0000 UTC m=+145.132188195" watchObservedRunningTime="2025-12-03 10:55:34.153578872 +0000 UTC m=+145.183580116" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.155098 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5r5m9"] Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 
10:55:34.156483 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.198149 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.200655 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" event={"ID":"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf","Type":"ContainerStarted","Data":"0e12d7f6a1669c348cd6e2c2cebd053b7d9ea1133af3ad74cccb5b45f1f3e13c"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.207810 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.209294 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.709274311 +0000 UTC m=+145.739275555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.219777 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5r5m9"] Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.255775 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" event={"ID":"bdf32ebd-938f-4584-bc73-b6cac4407864","Type":"ContainerStarted","Data":"5b27e32901ac85c2f2b60d50f44b7043b6d1d8b5f666428892c4020b406483c7"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.305757 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" event={"ID":"2a0fc109-2cad-4fb3-a5ed-16c590828ed3","Type":"ContainerStarted","Data":"0caac831b90daaf895595ea3a5bd00f0814bada782cbaf3ae2917183e3f9f7dc"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.306060 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" event={"ID":"2a0fc109-2cad-4fb3-a5ed-16c590828ed3","Type":"ContainerStarted","Data":"99332132b9ca1e1347e5244106a17c2fd0960fc0f1b114dc95a4680ad10b65f4"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.322312 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h26n9\" (UniqueName: \"kubernetes.io/projected/361ccc2b-5e39-4a12-ae52-3926f47f097d-kube-api-access-h26n9\") pod \"redhat-marketplace-5r5m9\" 
(UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.322380 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-utilities\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.322402 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-catalog-content\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.322446 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.322862 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.822848061 +0000 UTC m=+145.852849305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.381377 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" event={"ID":"5908ba24-74aa-4480-a53e-6cb7604d168d","Type":"ContainerStarted","Data":"5222f2d10aee251e4a5e076630ccfa668c2e22f996d8be803abfcec615e6ae92"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.423733 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.424116 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-utilities\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.424153 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-catalog-content\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 
crc kubenswrapper[4756]: I1203 10:55:34.424271 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h26n9\" (UniqueName: \"kubernetes.io/projected/361ccc2b-5e39-4a12-ae52-3926f47f097d-kube-api-access-h26n9\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.425416 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:34.925396257 +0000 UTC m=+145.955397501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.425909 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-utilities\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.426324 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-catalog-content\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" 
Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.451740 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" event={"ID":"52898585-7090-4ddc-b022-c439b71e241f","Type":"ContainerStarted","Data":"fca59b8642d5a934cb1cf1938810284e1f195b17ea475e1af1a6ad52dfe735a1"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.503434 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h26n9\" (UniqueName: \"kubernetes.io/projected/361ccc2b-5e39-4a12-ae52-3926f47f097d-kube-api-access-h26n9\") pod \"redhat-marketplace-5r5m9\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.506483 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hf4d2" podStartSLOduration=125.506451006 podStartE2EDuration="2m5.506451006s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.498465401 +0000 UTC m=+145.528466655" watchObservedRunningTime="2025-12-03 10:55:34.506451006 +0000 UTC m=+145.536452250" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.507861 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nccmk" podStartSLOduration=125.507854121 podStartE2EDuration="2m5.507854121s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.406455952 +0000 UTC m=+145.436457196" watchObservedRunningTime="2025-12-03 10:55:34.507854121 +0000 UTC m=+145.537855365" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 
10:55:34.510417 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5zc78"] Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.511518 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.531227 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.537024 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.037001772 +0000 UTC m=+146.067003016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.559019 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zc78"] Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.561732 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82dpl" event={"ID":"35b0d395-c4ca-4956-9f3e-134814838598","Type":"ContainerStarted","Data":"99b48f65522abc0b8172c6aa8c02c9ee1a347d0903bb7207c285010d1229d9ee"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.571457 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fxgqg" podStartSLOduration=125.571426762 podStartE2EDuration="2m5.571426762s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.553806349 +0000 UTC m=+145.583807593" watchObservedRunningTime="2025-12-03 10:55:34.571426762 +0000 UTC m=+145.601427996" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.573606 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.634774 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.637394 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmh62\" (UniqueName: \"kubernetes.io/projected/72f3b1cb-9933-4c78-ad7a-d27f873da187-kube-api-access-wmh62\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.637489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-catalog-content\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.648613 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-utilities\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.648874 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" 
event={"ID":"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d","Type":"ContainerStarted","Data":"d2e5a01ec3f0ccb3b924f79c49434400a257eca38240ed368785bddf040506d9"} Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.649483 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.149437065 +0000 UTC m=+146.179438299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.685635 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-972dg" event={"ID":"a4514e32-bd45-46cf-b849-42c2d941f777","Type":"ContainerStarted","Data":"3d95fde0f215bc8fc27c9cee217bb1aa1c29eddda5524eba9a0ece88a36a4183"} Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.692702 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nr6nx" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.692985 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2t5n7" podStartSLOduration=125.692945315 podStartE2EDuration="2m5.692945315s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 
10:55:34.608243819 +0000 UTC m=+145.638245063" watchObservedRunningTime="2025-12-03 10:55:34.692945315 +0000 UTC m=+145.722946549" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.693761 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-nr6nx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.693833 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nr6nx" podUID="3ed2177c-9a5f-40c2-a7a5-559f4444548d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.723686 4756 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nwj2t container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.723762 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" podUID="d7d4a407-5935-4005-b569-0a2c596a98c9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.736349 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nwqnq" podStartSLOduration=125.736326191 podStartE2EDuration="2m5.736326191s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.685311671 +0000 UTC m=+145.715312925" watchObservedRunningTime="2025-12-03 10:55:34.736326191 +0000 UTC m=+145.766327435" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.767211 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.767374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmh62\" (UniqueName: \"kubernetes.io/projected/72f3b1cb-9933-4c78-ad7a-d27f873da187-kube-api-access-wmh62\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.767507 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-catalog-content\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.767554 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-utilities\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.768123 4756 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-utilities\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.768421 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.268406906 +0000 UTC m=+146.298408150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.772518 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-catalog-content\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.824302 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmh62\" (UniqueName: \"kubernetes.io/projected/72f3b1cb-9933-4c78-ad7a-d27f873da187-kube-api-access-wmh62\") pod \"redhat-marketplace-5zc78\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.837787 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:34 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:34 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:34 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.837849 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.860494 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nr6nx" podStartSLOduration=125.860473728 podStartE2EDuration="2m5.860473728s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.792427103 +0000 UTC m=+145.822428347" watchObservedRunningTime="2025-12-03 10:55:34.860473728 +0000 UTC m=+145.890474972" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.879576 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.881001 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 10:55:35.380946182 +0000 UTC m=+146.410947576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.918560 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.943895 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hzzk5" podStartSLOduration=125.943867532 podStartE2EDuration="2m5.943867532s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.939362038 +0000 UTC m=+145.969363292" watchObservedRunningTime="2025-12-03 10:55:34.943867532 +0000 UTC m=+145.973868776" Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.944192 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" podStartSLOduration=125.944187052 podStartE2EDuration="2m5.944187052s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.888553565 +0000 UTC m=+145.918554819" watchObservedRunningTime="2025-12-03 10:55:34.944187052 +0000 UTC m=+145.974188296" Dec 03 10:55:34 crc 
kubenswrapper[4756]: I1203 10:55:34.992544 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:34 crc kubenswrapper[4756]: E1203 10:55:34.992996 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.492982582 +0000 UTC m=+146.522983826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:34 crc kubenswrapper[4756]: I1203 10:55:34.993881 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-msbqs" podStartSLOduration=125.993860319 podStartE2EDuration="2m5.993860319s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:34.990683557 +0000 UTC m=+146.020684801" watchObservedRunningTime="2025-12-03 10:55:34.993860319 +0000 UTC m=+146.023861553" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.093695 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.094054 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.594034009 +0000 UTC m=+146.624035253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.097935 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9zs5t"] Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.104723 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.125337 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.137678 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zs5t"] Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.197232 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.197657 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.69764351 +0000 UTC m=+146.727644754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.217306 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" podStartSLOduration=126.217285277 podStartE2EDuration="2m6.217285277s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:35.159302825 +0000 UTC m=+146.189304069" watchObservedRunningTime="2025-12-03 10:55:35.217285277 +0000 UTC m=+146.247286521" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.261377 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" podStartSLOduration=126.261355126 podStartE2EDuration="2m6.261355126s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:35.217782653 +0000 UTC m=+146.247783907" watchObservedRunningTime="2025-12-03 10:55:35.261355126 +0000 UTC m=+146.291356370" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.299783 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.300092 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-catalog-content\") pod \"redhat-operators-9zs5t\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.300130 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxp6\" (UniqueName: \"kubernetes.io/projected/1759c8af-db81-4841-a773-d8e3aaa6d9f2-kube-api-access-prxp6\") pod \"redhat-operators-9zs5t\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.300215 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.800192077 +0000 UTC m=+146.830193321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.300247 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-utilities\") pod \"redhat-operators-9zs5t\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.403897 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.403964 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-utilities\") pod \"redhat-operators-9zs5t\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.404034 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-catalog-content\") pod \"redhat-operators-9zs5t\" (UID: 
\"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.404060 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxp6\" (UniqueName: \"kubernetes.io/projected/1759c8af-db81-4841-a773-d8e3aaa6d9f2-kube-api-access-prxp6\") pod \"redhat-operators-9zs5t\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.404773 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:35.904752326 +0000 UTC m=+146.934753570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.404782 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-utilities\") pod \"redhat-operators-9zs5t\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.407686 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-catalog-content\") pod \"redhat-operators-9zs5t\" (UID: 
\"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.455520 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxp6\" (UniqueName: \"kubernetes.io/projected/1759c8af-db81-4841-a773-d8e3aaa6d9f2-kube-api-access-prxp6\") pod \"redhat-operators-9zs5t\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.471358 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ghq75"] Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.474889 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.505931 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.507127 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.007101277 +0000 UTC m=+147.037102521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.508722 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghq75"] Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.609599 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-utilities\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.610867 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.610986 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xj6\" (UniqueName: \"kubernetes.io/projected/37203e5a-c71e-4397-b96c-8f152834e488-kube-api-access-m8xj6\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.611031 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-catalog-content\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.611374 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.111360688 +0000 UTC m=+147.141361932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.712422 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.712861 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xj6\" (UniqueName: \"kubernetes.io/projected/37203e5a-c71e-4397-b96c-8f152834e488-kube-api-access-m8xj6\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.712926 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-catalog-content\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.713005 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.212971004 +0000 UTC m=+147.242972258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.713059 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-utilities\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.713476 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-catalog-content\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.713672 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-utilities\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.747770 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.815115 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.815726 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.315705457 +0000 UTC m=+147.345706701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.841087 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xj6\" (UniqueName: \"kubernetes.io/projected/37203e5a-c71e-4397-b96c-8f152834e488-kube-api-access-m8xj6\") pod \"redhat-operators-ghq75\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.849813 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:35 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:35 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:35 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.849880 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.855624 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5r5m9"] Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.855703 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" event={"ID":"8b49366b-8abe-4dde-b64b-fc5d34106174","Type":"ContainerStarted","Data":"53bfdf32af40a79ce9482dfdf2ec90c696c6299e399683d85464c83119d899af"} Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.874228 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.881302 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zc78"] Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.911353 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4514e32-bd45-46cf-b849-42c2d941f777" containerID="4dfdad50965a1d21bd07a7865e20eeb30ff47fe48875315b08a319658ec87823" exitCode=0 Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.911447 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-972dg" event={"ID":"a4514e32-bd45-46cf-b849-42c2d941f777","Type":"ContainerDied","Data":"4dfdad50965a1d21bd07a7865e20eeb30ff47fe48875315b08a319658ec87823"} Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.916168 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:35 crc kubenswrapper[4756]: E1203 10:55:35.917764 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.417731286 +0000 UTC m=+147.447732540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.937206 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tzhjl" podStartSLOduration=126.937177257 podStartE2EDuration="2m6.937177257s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:35.930933358 +0000 UTC m=+146.960934612" watchObservedRunningTime="2025-12-03 10:55:35.937177257 +0000 UTC m=+146.967178521" Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.996561 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4fmqt" event={"ID":"d8272209-b1b5-4dac-88d1-ca60b3f50256","Type":"ContainerStarted","Data":"3cf8075ba0f227bf2c56239bb823636035a64004d6bbf42b16396b7d0c474b17"} Dec 03 10:55:35 crc kubenswrapper[4756]: I1203 10:55:35.997151 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.019106 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 
10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.019683 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.519663503 +0000 UTC m=+147.549664747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.068151 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4fmqt" podStartSLOduration=11.068119951 podStartE2EDuration="11.068119951s" podCreationTimestamp="2025-12-03 10:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:36.066742387 +0000 UTC m=+147.096743631" watchObservedRunningTime="2025-12-03 10:55:36.068119951 +0000 UTC m=+147.098121195" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.073571 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" event={"ID":"5ca956b0-0bf9-4c47-97fb-24e5141cf2bf","Type":"ContainerStarted","Data":"9d4207ae54e8a2d4720dedbafc7ac1c09636d325ead9ee259cb1cef5e1546c46"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.106470 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" 
event={"ID":"618da479-bdd6-48c0-8d67-4d08154c9209","Type":"ContainerStarted","Data":"c78ca22f77062e46a7ea498ad7552307a2a6d72c8d343fcb50f5965575c69b83"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.120386 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.121037 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.620993701 +0000 UTC m=+147.650994945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.136917 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" event={"ID":"25bd25a7-d2cc-4b0c-969e-2b8cbd95b444","Type":"ContainerStarted","Data":"bfb0c95244847349ca26eccde7ae858fe4e0dd0d157fd3bec19fa03b421e1bbc"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.165155 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" 
event={"ID":"2237b661-da97-455e-b4ec-9b42fbcf6cc8","Type":"ContainerStarted","Data":"1818ba72b67c17b33e373886943da02c8c1d70943d58092492077431f16c964b"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.165878 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.188533 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" event={"ID":"2a0fc109-2cad-4fb3-a5ed-16c590828ed3","Type":"ContainerStarted","Data":"c12a75a5f9e886077010e1d39ff17ce8f497c605b598f70d84cf8df6f4f3f07b"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.203507 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" podStartSLOduration=127.203480166 podStartE2EDuration="2m7.203480166s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:36.188341753 +0000 UTC m=+147.218342987" watchObservedRunningTime="2025-12-03 10:55:36.203480166 +0000 UTC m=+147.233481420" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.208334 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9mtks" event={"ID":"d1bab4a4-a0e1-4445-9323-c3ea6c986f1d","Type":"ContainerStarted","Data":"3894ab5cee1dc7025062aa7a5b9a2fc0a995ff7045546f48a965b3e5526e0592"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.211243 4756 generic.go:334] "Generic (PLEG): container finished" podID="6ce33fc3-643b-4347-aa7f-31fcfe461b1d" containerID="ef9c857ca5a9d3dd2f7c6ca6fa9cd5d612d9bfd31a00a6e81842ba22662ebe8e" exitCode=0 Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.211307 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" event={"ID":"6ce33fc3-643b-4347-aa7f-31fcfe461b1d","Type":"ContainerDied","Data":"ef9c857ca5a9d3dd2f7c6ca6fa9cd5d612d9bfd31a00a6e81842ba22662ebe8e"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.226021 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.227653 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.727629978 +0000 UTC m=+147.757631412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.240411 4756 generic.go:334] "Generic (PLEG): container finished" podID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerID="96e1b16c999e9b147402f934e362a5bb7b45653a2155bafd125a4e44b48975d3" exitCode=0 Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.240499 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l688d" event={"ID":"89c85576-c4b8-4c93-8ce0-de9654286cff","Type":"ContainerDied","Data":"96e1b16c999e9b147402f934e362a5bb7b45653a2155bafd125a4e44b48975d3"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.240544 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l688d" event={"ID":"89c85576-c4b8-4c93-8ce0-de9654286cff","Type":"ContainerStarted","Data":"09ee05cc79a263d7b91339430052c6c9a00be12e008ba4ad2a5e1237e1daeb78"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.307590 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rdwph" event={"ID":"8d80e0dc-bc00-4d25-8fbd-4eeb2976a9ba","Type":"ContainerStarted","Data":"2697f4003fa4c9f0567ab015db79092891fa63fdd9b5d360e53b3b31716d138f"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.308156 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xkdl4" podStartSLOduration=127.30813437 podStartE2EDuration="2m7.30813437s" podCreationTimestamp="2025-12-03 
10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:36.306709224 +0000 UTC m=+147.336710468" watchObservedRunningTime="2025-12-03 10:55:36.30813437 +0000 UTC m=+147.338135614" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.331916 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.335140 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.835108841 +0000 UTC m=+147.865110245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.382216 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" event={"ID":"7399790d-a82c-454a-aef1-10bb460bfe73","Type":"ContainerStarted","Data":"05cbd7c4b957590d36c07ddbdd315cea2ad064c703d5e24d29ccf8f94e4b2cd2"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.411975 4756 generic.go:334] "Generic (PLEG): container finished" podID="35b0d395-c4ca-4956-9f3e-134814838598" containerID="55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5" exitCode=0 Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.412859 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82dpl" event={"ID":"35b0d395-c4ca-4956-9f3e-134814838598","Type":"ContainerDied","Data":"55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5"} Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.413416 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tqdqx" podStartSLOduration=127.413391772 podStartE2EDuration="2m7.413391772s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:36.37231702 +0000 UTC m=+147.402318264" watchObservedRunningTime="2025-12-03 10:55:36.413391772 +0000 UTC m=+147.443393006" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 
10:55:36.414928 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" podStartSLOduration=127.414919362 podStartE2EDuration="2m7.414919362s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:36.412474043 +0000 UTC m=+147.442475277" watchObservedRunningTime="2025-12-03 10:55:36.414919362 +0000 UTC m=+147.444920606" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.415673 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-nr6nx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.415743 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nr6nx" podUID="3ed2177c-9a5f-40c2-a7a5-559f4444548d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.431558 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.435612 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.437275 4756 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:36.937258795 +0000 UTC m=+147.967260039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.446273 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nwj2t" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.543215 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.545266 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.045245995 +0000 UTC m=+148.075247239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.570986 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" podStartSLOduration=127.570930616 podStartE2EDuration="2m7.570930616s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:36.548793439 +0000 UTC m=+147.578794683" watchObservedRunningTime="2025-12-03 10:55:36.570930616 +0000 UTC m=+147.600931870" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.591041 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wczzj" podStartSLOduration=127.591006707 podStartE2EDuration="2m7.591006707s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:36.589715226 +0000 UTC m=+147.619716480" watchObservedRunningTime="2025-12-03 10:55:36.591006707 +0000 UTC m=+147.621007951" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.618709 4756 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.666038 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.667486 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.167443509 +0000 UTC m=+148.197444753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.775160 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.776332 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.276306388 +0000 UTC m=+148.306307632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.809380 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9zs5t"] Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.836355 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:36 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:36 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:36 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.836443 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.881106 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.881546 4756 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.38153229 +0000 UTC m=+148.411533524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.893870 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.894808 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.898682 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.907728 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.975370 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.981902 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.982269 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84376a26-ffd1-4ec9-8a10-c164b064a67a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:36 crc kubenswrapper[4756]: I1203 10:55:36.982321 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84376a26-ffd1-4ec9-8a10-c164b064a67a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:36 crc kubenswrapper[4756]: E1203 10:55:36.982568 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.482545557 +0000 UTC m=+148.512546801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.075784 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghq75"] Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.084416 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.084499 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84376a26-ffd1-4ec9-8a10-c164b064a67a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.084565 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.084591 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.084665 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84376a26-ffd1-4ec9-8a10-c164b064a67a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.084745 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84376a26-ffd1-4ec9-8a10-c164b064a67a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:37 crc kubenswrapper[4756]: E1203 10:55:37.085129 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.585113864 +0000 UTC m=+148.615115108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.086492 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.098154 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.123615 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84376a26-ffd1-4ec9-8a10-c164b064a67a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.181334 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.186142 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.186349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.186401 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:37 crc kubenswrapper[4756]: E1203 10:55:37.187597 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.687551967 +0000 UTC m=+148.717553211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.192288 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.192884 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.261108 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.287811 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:37 crc kubenswrapper[4756]: E1203 10:55:37.288182 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.788167322 +0000 UTC m=+148.818168566 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.330740 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.392050 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.392169 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-secret-volume\") pod \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.392207 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-config-volume\") pod \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.392310 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dc4f\" (UniqueName: \"kubernetes.io/projected/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-kube-api-access-7dc4f\") pod \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\" (UID: \"6ce33fc3-643b-4347-aa7f-31fcfe461b1d\") " Dec 03 10:55:37 crc kubenswrapper[4756]: E1203 10:55:37.394129 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.894109336 +0000 UTC m=+148.924110580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.394135 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ce33fc3-643b-4347-aa7f-31fcfe461b1d" (UID: "6ce33fc3-643b-4347-aa7f-31fcfe461b1d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.408809 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ce33fc3-643b-4347-aa7f-31fcfe461b1d" (UID: "6ce33fc3-643b-4347-aa7f-31fcfe461b1d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.408909 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.409201 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.409538 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-kube-api-access-7dc4f" (OuterVolumeSpecName: "kube-api-access-7dc4f") pod "6ce33fc3-643b-4347-aa7f-31fcfe461b1d" (UID: "6ce33fc3-643b-4347-aa7f-31fcfe461b1d"). InnerVolumeSpecName "kube-api-access-7dc4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.442576 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" event={"ID":"6ce33fc3-643b-4347-aa7f-31fcfe461b1d","Type":"ContainerDied","Data":"80323519e4d8c941aac41daae54586d01de0aa3911d5519c9067e7b1c43ca5e6"} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.442636 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80323519e4d8c941aac41daae54586d01de0aa3911d5519c9067e7b1c43ca5e6" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.442716 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.462830 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.472291 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.481078 4756 generic.go:334] "Generic (PLEG): container finished" podID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerID="6feeb57ff3e34510fd89605c4581a34c1aadf136bf636cf71cd2eae6ba7dda2e" exitCode=0 Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.481153 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zc78" event={"ID":"72f3b1cb-9933-4c78-ad7a-d27f873da187","Type":"ContainerDied","Data":"6feeb57ff3e34510fd89605c4581a34c1aadf136bf636cf71cd2eae6ba7dda2e"} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.481180 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zc78" event={"ID":"72f3b1cb-9933-4c78-ad7a-d27f873da187","Type":"ContainerStarted","Data":"915a5d5e31325fe140a71f5dc8d762caae6e3813e74940d4de4fee471bd8a513"} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.483751 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghq75" event={"ID":"37203e5a-c71e-4397-b96c-8f152834e488","Type":"ContainerStarted","Data":"b4fe77b862cb5dfbee31840bb5de2187642d69e64cb01fc02fbf24013a9ffd37"} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.495012 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.495134 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dc4f\" (UniqueName: 
\"kubernetes.io/projected/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-kube-api-access-7dc4f\") on node \"crc\" DevicePath \"\"" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.495151 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.495161 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ce33fc3-643b-4347-aa7f-31fcfe461b1d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 10:55:37 crc kubenswrapper[4756]: E1203 10:55:37.495396 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 10:55:37.995378742 +0000 UTC m=+149.025379986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mdbh6" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.527424 4756 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T10:55:36.618746614Z","Handler":null,"Name":""} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.538906 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" event={"ID":"1fd93ab4-e1c0-456f-8741-b50c983c8a89","Type":"ContainerStarted","Data":"f61951a34188eae12d328a8cfd7e6f4f842091350a7503ce560cd199824e9fe9"} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.574466 4756 generic.go:334] "Generic (PLEG): container finished" podID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerID="bcab86c31974c5d462b424822ba52ab31329a285563da53053b02d58b39ded35" exitCode=0 Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.575534 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5r5m9" event={"ID":"361ccc2b-5e39-4a12-ae52-3926f47f097d","Type":"ContainerDied","Data":"bcab86c31974c5d462b424822ba52ab31329a285563da53053b02d58b39ded35"} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.575574 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5r5m9" event={"ID":"361ccc2b-5e39-4a12-ae52-3926f47f097d","Type":"ContainerStarted","Data":"91e45115cbad4726905b52348ae1ac47e14fc63ce22d4c2201d4c5ea523e78fa"} Dec 03 10:55:37 crc 
kubenswrapper[4756]: I1203 10:55:37.583842 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zs5t" event={"ID":"1759c8af-db81-4841-a773-d8e3aaa6d9f2","Type":"ContainerStarted","Data":"6b5b613bc49f437352e5ef605b6ca13416c3080f23d563535390276bed925230"} Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.596491 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:37 crc kubenswrapper[4756]: E1203 10:55:37.600540 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 10:55:38.1005127 +0000 UTC m=+149.130513944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.654188 4756 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.654240 4756 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.656277 4756 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xndxw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]log ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]etcd ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/max-in-flight-filter ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 10:55:37 crc kubenswrapper[4756]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 03 10:55:37 crc kubenswrapper[4756]: 
[-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 10:55:37 crc kubenswrapper[4756]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 10:55:37 crc kubenswrapper[4756]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 10:55:37 crc kubenswrapper[4756]: livez check failed Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.656325 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" podUID="5ca956b0-0bf9-4c47-97fb-24e5141cf2bf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.689985 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.703194 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.759085 4756 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.759145 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.761666 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:55:37 crc kubenswrapper[4756]: W1203 10:55:37.790000 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84376a26_ffd1_4ec9_8a10_c164b064a67a.slice/crio-0bbe6550b0cbef124ba319db34bd4bd081f90ea48adcb19af3b228aabd283b7f WatchSource:0}: Error finding container 0bbe6550b0cbef124ba319db34bd4bd081f90ea48adcb19af3b228aabd283b7f: Status 404 returned error can't find the container with id 0bbe6550b0cbef124ba319db34bd4bd081f90ea48adcb19af3b228aabd283b7f Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.817808 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.818601 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.830966 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Dec 03 10:55:37 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:37 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:37 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.831029 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.900487 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 10:55:37 crc kubenswrapper[4756]: I1203 10:55:37.920481 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mdbh6\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.018322 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.104122 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-nr6nx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.104232 4756 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-nr6nx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.104301 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nr6nx" podUID="3ed2177c-9a5f-40c2-a7a5-559f4444548d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.104234 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nr6nx" podUID="3ed2177c-9a5f-40c2-a7a5-559f4444548d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.104822 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.244360 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.440175 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.585075 4756 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-nws44 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.585171 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" podUID="2237b661-da97-455e-b4ec-9b42fbcf6cc8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.652973 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nws44" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.694829 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" event={"ID":"1fd93ab4-e1c0-456f-8741-b50c983c8a89","Type":"ContainerStarted","Data":"aeaef37f8a1e5d7673e672a69fc3df3915380b448f76208b97679d4f28a94f1e"} Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.706256 4756 generic.go:334] "Generic (PLEG): container finished" podID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerID="e8749b11580d3139bbf8a353b19ef0eaefdff075949832f18f595f4faf0e9b3c" exitCode=0 Dec 03 10:55:38 crc 
kubenswrapper[4756]: I1203 10:55:38.707194 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zs5t" event={"ID":"1759c8af-db81-4841-a773-d8e3aaa6d9f2","Type":"ContainerDied","Data":"e8749b11580d3139bbf8a353b19ef0eaefdff075949832f18f595f4faf0e9b3c"} Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.711018 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"622bf55a3d06f51f36466b8e436bf14469a311def7a51d9ad8e5aeba1fd0ef27"} Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.727812 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"84376a26-ffd1-4ec9-8a10-c164b064a67a","Type":"ContainerStarted","Data":"0bbe6550b0cbef124ba319db34bd4bd081f90ea48adcb19af3b228aabd283b7f"} Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.743540 4756 generic.go:334] "Generic (PLEG): container finished" podID="37203e5a-c71e-4397-b96c-8f152834e488" containerID="eab2eccb8a7be8e821f1039ce35e0c14a7e4f52c6038df272e52c6bb1a7e0d8e" exitCode=0 Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.744657 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghq75" event={"ID":"37203e5a-c71e-4397-b96c-8f152834e488","Type":"ContainerDied","Data":"eab2eccb8a7be8e821f1039ce35e0c14a7e4f52c6038df272e52c6bb1a7e0d8e"} Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.830308 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:38 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:38 crc kubenswrapper[4756]: 
[+]process-running ok Dec 03 10:55:38 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.830397 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.971293 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.971351 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:38 crc kubenswrapper[4756]: I1203 10:55:38.979862 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.049058 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.049598 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.051034 4756 patch_prober.go:28] interesting pod/console-f9d7485db-hf4d2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.051108 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hf4d2" podUID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 
10.217.0.12:8443: connect: connection refused" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.251107 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.359202 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mdbh6"] Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.784269 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"84376a26-ffd1-4ec9-8a10-c164b064a67a","Type":"ContainerStarted","Data":"0a7532b96ea9d7ba532bb808dc92b6d4bb118cb9f2e82dd4a59a53e533405336"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.806602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"167cf30c1c48cf2be8b7f6064d239c3971eaad46de033bc0a8b91ae0f3d246d6"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.806676 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f083ba0bc25ec1178eb72cf1f661a9d5bb4a8a0f2fded209bd6de9046e67d854"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.807246 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.814720 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.814690912 podStartE2EDuration="3.814690912s" podCreationTimestamp="2025-12-03 10:55:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:39.810418316 +0000 UTC m=+150.840419560" watchObservedRunningTime="2025-12-03 10:55:39.814690912 +0000 UTC m=+150.844692156" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.819543 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" event={"ID":"1fd93ab4-e1c0-456f-8741-b50c983c8a89","Type":"ContainerStarted","Data":"209bd920ef55962b914f7840ba55abce9cb4b27fe1ae6e63a26795b39471b30c"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.825560 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:39 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:39 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:39 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.825647 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.839137 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" event={"ID":"96867856-6fdb-4b8a-b19e-54bc30bcc607","Type":"ContainerStarted","Data":"630c33837cbd642affa9fc29651918cfcc167d785551eee9f3d06af48d32932c"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.845444 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9e902649df400eda0e5f8c9a1a4ce5d8084b9df2867b854507b7c78bc808818d"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.845520 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"700e2858c2971ef6b390286efb44f73adb3459c95b86f223fec1a9573cd4230a"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.858225 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-z9c8s" podStartSLOduration=14.858201513000001 podStartE2EDuration="14.858201513s" podCreationTimestamp="2025-12-03 10:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:39.855147585 +0000 UTC m=+150.885148829" watchObservedRunningTime="2025-12-03 10:55:39.858201513 +0000 UTC m=+150.888202757" Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.865139 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1ea4a6b90d868ae6f4ff957417dd7286660f794126ebd4d2aa4d0396a996540b"} Dec 03 10:55:39 crc kubenswrapper[4756]: I1203 10:55:39.880781 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-88rfz" Dec 03 10:55:40 crc kubenswrapper[4756]: I1203 10:55:40.820393 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:40 crc kubenswrapper[4756]: [-]has-synced failed: 
reason withheld Dec 03 10:55:40 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:40 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:40 crc kubenswrapper[4756]: I1203 10:55:40.820495 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:40 crc kubenswrapper[4756]: I1203 10:55:40.900456 4756 generic.go:334] "Generic (PLEG): container finished" podID="84376a26-ffd1-4ec9-8a10-c164b064a67a" containerID="0a7532b96ea9d7ba532bb808dc92b6d4bb118cb9f2e82dd4a59a53e533405336" exitCode=0 Dec 03 10:55:40 crc kubenswrapper[4756]: I1203 10:55:40.900672 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"84376a26-ffd1-4ec9-8a10-c164b064a67a","Type":"ContainerDied","Data":"0a7532b96ea9d7ba532bb808dc92b6d4bb118cb9f2e82dd4a59a53e533405336"} Dec 03 10:55:40 crc kubenswrapper[4756]: I1203 10:55:40.908879 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" event={"ID":"96867856-6fdb-4b8a-b19e-54bc30bcc607","Type":"ContainerStarted","Data":"86590e4837a1a00770c1b148d83ae428cec8595caeeb08938267cb99ccc1fc08"} Dec 03 10:55:40 crc kubenswrapper[4756]: I1203 10:55:40.909913 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.476226 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" podStartSLOduration=132.476204848 podStartE2EDuration="2m12.476204848s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 10:55:40.939909593 +0000 UTC m=+151.969910857" watchObservedRunningTime="2025-12-03 10:55:41.476204848 +0000 UTC m=+152.506206092" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.489023 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 10:55:41 crc kubenswrapper[4756]: E1203 10:55:41.489566 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce33fc3-643b-4347-aa7f-31fcfe461b1d" containerName="collect-profiles" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.489600 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce33fc3-643b-4347-aa7f-31fcfe461b1d" containerName="collect-profiles" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.489986 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce33fc3-643b-4347-aa7f-31fcfe461b1d" containerName="collect-profiles" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.490737 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.492860 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.496588 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.497672 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.617394 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.617490 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.719046 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.719194 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.719212 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.759842 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.819894 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:41 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:41 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:41 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.820011 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:41 crc kubenswrapper[4756]: I1203 10:55:41.826266 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.223393 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 10:55:42 crc kubenswrapper[4756]: W1203 10:55:42.296296 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc748b701_65f2_4a2c_9f70_06e3bf04fc1d.slice/crio-b367bb21ce3703089dd895ff12b39d5fd6142108dd53ec855d899b32ff5cb91b WatchSource:0}: Error finding container b367bb21ce3703089dd895ff12b39d5fd6142108dd53ec855d899b32ff5cb91b: Status 404 returned error can't find the container with id b367bb21ce3703089dd895ff12b39d5fd6142108dd53ec855d899b32ff5cb91b Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.405461 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.428311 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xndxw" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.438639 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.545438 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84376a26-ffd1-4ec9-8a10-c164b064a67a-kube-api-access\") pod \"84376a26-ffd1-4ec9-8a10-c164b064a67a\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.545602 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84376a26-ffd1-4ec9-8a10-c164b064a67a-kubelet-dir\") pod \"84376a26-ffd1-4ec9-8a10-c164b064a67a\" (UID: \"84376a26-ffd1-4ec9-8a10-c164b064a67a\") " Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.545935 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84376a26-ffd1-4ec9-8a10-c164b064a67a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84376a26-ffd1-4ec9-8a10-c164b064a67a" (UID: "84376a26-ffd1-4ec9-8a10-c164b064a67a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.553343 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84376a26-ffd1-4ec9-8a10-c164b064a67a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84376a26-ffd1-4ec9-8a10-c164b064a67a" (UID: "84376a26-ffd1-4ec9-8a10-c164b064a67a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.647965 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84376a26-ffd1-4ec9-8a10-c164b064a67a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.648007 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84376a26-ffd1-4ec9-8a10-c164b064a67a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.821234 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:42 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:42 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:42 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.821300 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.994350 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"84376a26-ffd1-4ec9-8a10-c164b064a67a","Type":"ContainerDied","Data":"0bbe6550b0cbef124ba319db34bd4bd081f90ea48adcb19af3b228aabd283b7f"} Dec 03 10:55:42 crc kubenswrapper[4756]: I1203 10:55:42.994413 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbe6550b0cbef124ba319db34bd4bd081f90ea48adcb19af3b228aabd283b7f" Dec 03 10:55:42 crc 
kubenswrapper[4756]: I1203 10:55:42.994509 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 10:55:43 crc kubenswrapper[4756]: I1203 10:55:43.012651 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c748b701-65f2-4a2c-9f70-06e3bf04fc1d","Type":"ContainerStarted","Data":"b367bb21ce3703089dd895ff12b39d5fd6142108dd53ec855d899b32ff5cb91b"} Dec 03 10:55:43 crc kubenswrapper[4756]: I1203 10:55:43.820840 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:43 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:43 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:43 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:43 crc kubenswrapper[4756]: I1203 10:55:43.820986 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:44 crc kubenswrapper[4756]: I1203 10:55:44.062110 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4fmqt" Dec 03 10:55:44 crc kubenswrapper[4756]: I1203 10:55:44.820626 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:44 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:44 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:44 crc 
kubenswrapper[4756]: healthz check failed Dec 03 10:55:44 crc kubenswrapper[4756]: I1203 10:55:44.820713 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:45 crc kubenswrapper[4756]: I1203 10:55:45.072600 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c748b701-65f2-4a2c-9f70-06e3bf04fc1d","Type":"ContainerStarted","Data":"b70e256f70f329ef98cb9cd0dea0885ab3a658680d8ca176c1bfae4ff560630a"} Dec 03 10:55:45 crc kubenswrapper[4756]: I1203 10:55:45.090361 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.090333938 podStartE2EDuration="4.090333938s" podCreationTimestamp="2025-12-03 10:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:55:45.085494443 +0000 UTC m=+156.115495687" watchObservedRunningTime="2025-12-03 10:55:45.090333938 +0000 UTC m=+156.120335182" Dec 03 10:55:45 crc kubenswrapper[4756]: I1203 10:55:45.820317 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:45 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:45 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:45 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:45 crc kubenswrapper[4756]: I1203 10:55:45.820438 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:46 crc kubenswrapper[4756]: I1203 10:55:46.820818 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:46 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:46 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:46 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:46 crc kubenswrapper[4756]: I1203 10:55:46.821599 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:47 crc kubenswrapper[4756]: I1203 10:55:47.095442 4756 generic.go:334] "Generic (PLEG): container finished" podID="c748b701-65f2-4a2c-9f70-06e3bf04fc1d" containerID="b70e256f70f329ef98cb9cd0dea0885ab3a658680d8ca176c1bfae4ff560630a" exitCode=0 Dec 03 10:55:47 crc kubenswrapper[4756]: I1203 10:55:47.095494 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c748b701-65f2-4a2c-9f70-06e3bf04fc1d","Type":"ContainerDied","Data":"b70e256f70f329ef98cb9cd0dea0885ab3a658680d8ca176c1bfae4ff560630a"} Dec 03 10:55:47 crc kubenswrapper[4756]: I1203 10:55:47.820228 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:47 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:47 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:47 crc kubenswrapper[4756]: 
healthz check failed Dec 03 10:55:47 crc kubenswrapper[4756]: I1203 10:55:47.820301 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.104840 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nr6nx" Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.357053 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.461969 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kube-api-access\") pod \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.462088 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kubelet-dir\") pod \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\" (UID: \"c748b701-65f2-4a2c-9f70-06e3bf04fc1d\") " Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.462172 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c748b701-65f2-4a2c-9f70-06e3bf04fc1d" (UID: "c748b701-65f2-4a2c-9f70-06e3bf04fc1d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.462709 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.483617 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c748b701-65f2-4a2c-9f70-06e3bf04fc1d" (UID: "c748b701-65f2-4a2c-9f70-06e3bf04fc1d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.564083 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c748b701-65f2-4a2c-9f70-06e3bf04fc1d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.822457 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:48 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:48 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:48 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:48 crc kubenswrapper[4756]: I1203 10:55:48.822864 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:49 crc kubenswrapper[4756]: I1203 10:55:49.049432 4756 patch_prober.go:28] interesting pod/console-f9d7485db-hf4d2 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 03 10:55:49 crc kubenswrapper[4756]: I1203 10:55:49.049598 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hf4d2" podUID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 03 10:55:49 crc kubenswrapper[4756]: I1203 10:55:49.122477 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c748b701-65f2-4a2c-9f70-06e3bf04fc1d","Type":"ContainerDied","Data":"b367bb21ce3703089dd895ff12b39d5fd6142108dd53ec855d899b32ff5cb91b"} Dec 03 10:55:49 crc kubenswrapper[4756]: I1203 10:55:49.122533 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b367bb21ce3703089dd895ff12b39d5fd6142108dd53ec855d899b32ff5cb91b" Dec 03 10:55:49 crc kubenswrapper[4756]: I1203 10:55:49.122750 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 10:55:49 crc kubenswrapper[4756]: I1203 10:55:49.820423 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:49 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:49 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:49 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:49 crc kubenswrapper[4756]: I1203 10:55:49.820546 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:50 crc kubenswrapper[4756]: I1203 10:55:50.820070 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:50 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:50 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:50 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:50 crc kubenswrapper[4756]: I1203 10:55:50.820149 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:51 crc kubenswrapper[4756]: I1203 10:55:51.828246 4756 patch_prober.go:28] interesting pod/router-default-5444994796-827dv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 03 10:55:51 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Dec 03 10:55:51 crc kubenswrapper[4756]: [+]process-running ok Dec 03 10:55:51 crc kubenswrapper[4756]: healthz check failed Dec 03 10:55:51 crc kubenswrapper[4756]: I1203 10:55:51.828339 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-827dv" podUID="c0e4abd5-fe26-4e86-b669-e1089fc6470f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 10:55:51 crc kubenswrapper[4756]: I1203 10:55:51.942826 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:51 crc kubenswrapper[4756]: I1203 10:55:51.956816 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd88c3db-a819-4fb9-a952-30dc1b67c375-metrics-certs\") pod \"network-metrics-daemon-qvt7n\" (UID: \"cd88c3db-a819-4fb9-a952-30dc1b67c375\") " pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:52 crc kubenswrapper[4756]: I1203 10:55:52.189766 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qvt7n" Dec 03 10:55:52 crc kubenswrapper[4756]: I1203 10:55:52.607477 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:55:52 crc kubenswrapper[4756]: I1203 10:55:52.607913 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:55:52 crc kubenswrapper[4756]: I1203 10:55:52.820801 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:52 crc kubenswrapper[4756]: I1203 10:55:52.824223 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-827dv" Dec 03 10:55:58 crc kubenswrapper[4756]: I1203 10:55:58.255257 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:55:59 crc kubenswrapper[4756]: I1203 10:55:59.416439 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:55:59 crc kubenswrapper[4756]: I1203 10:55:59.424142 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 10:56:08 crc kubenswrapper[4756]: I1203 10:56:08.445263 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kl5rb" 
Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.685283 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 10:56:14 crc kubenswrapper[4756]: E1203 10:56:14.686217 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c748b701-65f2-4a2c-9f70-06e3bf04fc1d" containerName="pruner" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.686238 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c748b701-65f2-4a2c-9f70-06e3bf04fc1d" containerName="pruner" Dec 03 10:56:14 crc kubenswrapper[4756]: E1203 10:56:14.686255 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84376a26-ffd1-4ec9-8a10-c164b064a67a" containerName="pruner" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.686265 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="84376a26-ffd1-4ec9-8a10-c164b064a67a" containerName="pruner" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.686433 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c748b701-65f2-4a2c-9f70-06e3bf04fc1d" containerName="pruner" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.686457 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="84376a26-ffd1-4ec9-8a10-c164b064a67a" containerName="pruner" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.687161 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.691175 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.691422 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.700894 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.829376 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a869529f-3f63-4fae-a11d-23cd1738d3a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.830518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a869529f-3f63-4fae-a11d-23cd1738d3a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.931863 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a869529f-3f63-4fae-a11d-23cd1738d3a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.931985 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/a869529f-3f63-4fae-a11d-23cd1738d3a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.932006 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a869529f-3f63-4fae-a11d-23cd1738d3a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:14 crc kubenswrapper[4756]: I1203 10:56:14.966429 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a869529f-3f63-4fae-a11d-23cd1738d3a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:15 crc kubenswrapper[4756]: I1203 10:56:15.023076 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:15 crc kubenswrapper[4756]: E1203 10:56:15.233807 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 10:56:15 crc kubenswrapper[4756]: E1203 10:56:15.234130 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzw5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l688d_openshift-marketplace(89c85576-c4b8-4c93-8ce0-de9654286cff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:15 crc kubenswrapper[4756]: E1203 10:56:15.235281 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l688d" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" Dec 03 10:56:17 crc kubenswrapper[4756]: I1203 10:56:17.468849 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 10:56:18 crc kubenswrapper[4756]: E1203 10:56:18.415720 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l688d" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" Dec 03 10:56:18 crc kubenswrapper[4756]: E1203 10:56:18.536366 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 10:56:18 crc kubenswrapper[4756]: E1203 10:56:18.536605 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prxp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9zs5t_openshift-marketplace(1759c8af-db81-4841-a773-d8e3aaa6d9f2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:18 crc kubenswrapper[4756]: E1203 10:56:18.537794 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9zs5t" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" Dec 03 10:56:18 crc 
kubenswrapper[4756]: E1203 10:56:18.544602 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 10:56:18 crc kubenswrapper[4756]: E1203 10:56:18.544796 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8xj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-ghq75_openshift-marketplace(37203e5a-c71e-4397-b96c-8f152834e488): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:18 crc kubenswrapper[4756]: E1203 10:56:18.546050 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ghq75" podUID="37203e5a-c71e-4397-b96c-8f152834e488" Dec 03 10:56:19 crc kubenswrapper[4756]: I1203 10:56:19.875836 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 10:56:19 crc kubenswrapper[4756]: I1203 10:56:19.878417 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:19 crc kubenswrapper[4756]: I1203 10:56:19.892141 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 10:56:19 crc kubenswrapper[4756]: I1203 10:56:19.907329 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kube-api-access\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:19 crc kubenswrapper[4756]: I1203 10:56:19.907389 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:19 crc kubenswrapper[4756]: I1203 
10:56:19.907423 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-var-lock\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:20 crc kubenswrapper[4756]: I1203 10:56:20.008564 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kube-api-access\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:20 crc kubenswrapper[4756]: I1203 10:56:20.008665 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:20 crc kubenswrapper[4756]: I1203 10:56:20.008722 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-var-lock\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:20 crc kubenswrapper[4756]: I1203 10:56:20.008795 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:20 crc kubenswrapper[4756]: I1203 10:56:20.008824 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-var-lock\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:20 crc kubenswrapper[4756]: E1203 10:56:20.019315 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9zs5t" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" Dec 03 10:56:20 crc kubenswrapper[4756]: E1203 10:56:20.019412 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ghq75" podUID="37203e5a-c71e-4397-b96c-8f152834e488" Dec 03 10:56:20 crc kubenswrapper[4756]: I1203 10:56:20.032089 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kube-api-access\") pod \"installer-9-crc\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:20 crc kubenswrapper[4756]: E1203 10:56:20.094316 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 10:56:20 crc kubenswrapper[4756]: E1203 10:56:20.094759 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sschx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-82dpl_openshift-marketplace(35b0d395-c4ca-4956-9f3e-134814838598): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:20 crc kubenswrapper[4756]: E1203 10:56:20.095999 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-82dpl" podUID="35b0d395-c4ca-4956-9f3e-134814838598" Dec 03 10:56:20 crc 
kubenswrapper[4756]: E1203 10:56:20.126181 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 10:56:20 crc kubenswrapper[4756]: E1203 10:56:20.126369 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdh86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-2cjlg_openshift-marketplace(c971a632-2e34-4783-bb0a-9e516fb8bdbd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:20 crc kubenswrapper[4756]: E1203 10:56:20.127548 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2cjlg" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" Dec 03 10:56:20 crc kubenswrapper[4756]: I1203 10:56:20.202377 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:56:21 crc kubenswrapper[4756]: E1203 10:56:21.982764 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-82dpl" podUID="35b0d395-c4ca-4956-9f3e-134814838598" Dec 03 10:56:21 crc kubenswrapper[4756]: E1203 10:56:21.983370 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2cjlg" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.233383 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.234849 4756 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmh62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5zc78_openshift-marketplace(72f3b1cb-9933-4c78-ad7a-d27f873da187): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.237439 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5zc78" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.348773 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5zc78" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" Dec 03 10:56:22 crc kubenswrapper[4756]: I1203 10:56:22.453170 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qvt7n"] Dec 03 10:56:22 crc kubenswrapper[4756]: I1203 10:56:22.459076 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 10:56:22 crc kubenswrapper[4756]: W1203 10:56:22.468337 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd88c3db_a819_4fb9_a952_30dc1b67c375.slice/crio-d3503986c6a7688b293783f2395b14efd987f49084e96e8220f507b6ff9781f7 WatchSource:0}: Error finding container d3503986c6a7688b293783f2395b14efd987f49084e96e8220f507b6ff9781f7: Status 404 returned error can't find the container with id d3503986c6a7688b293783f2395b14efd987f49084e96e8220f507b6ff9781f7 Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.488092 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.488684 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h26n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5r5m9_openshift-marketplace(361ccc2b-5e39-4a12-ae52-3926f47f097d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.490407 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5r5m9" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.539252 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.539469 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpw2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-972dg_openshift-marketplace(a4514e32-bd45-46cf-b849-42c2d941f777): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 10:56:22 crc kubenswrapper[4756]: E1203 10:56:22.540686 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-972dg" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" Dec 03 10:56:22 crc kubenswrapper[4756]: I1203 10:56:22.543911 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 10:56:22 crc kubenswrapper[4756]: W1203 10:56:22.549069 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode24b32f1_10bc_4e1b_867a_edc2aab1a5ef.slice/crio-e265db6279c33ee799676e524e72711485d07d400bb28f7436d362b25180095a WatchSource:0}: Error finding container e265db6279c33ee799676e524e72711485d07d400bb28f7436d362b25180095a: Status 404 returned error can't find the container with id e265db6279c33ee799676e524e72711485d07d400bb28f7436d362b25180095a Dec 03 10:56:22 crc kubenswrapper[4756]: I1203 10:56:22.608917 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:56:22 crc kubenswrapper[4756]: I1203 10:56:22.609018 4756 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.354143 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef","Type":"ContainerStarted","Data":"b5d108952d14e888eeb28660f4f104261afa0dbde01fa2fcafa52a8b24f263dd"} Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.354522 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef","Type":"ContainerStarted","Data":"e265db6279c33ee799676e524e72711485d07d400bb28f7436d362b25180095a"} Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.355946 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" event={"ID":"cd88c3db-a819-4fb9-a952-30dc1b67c375","Type":"ContainerStarted","Data":"275fae85a077eca4a1baeb77efc10f67725c719ba47d01818f0dac35942c0be6"} Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.356011 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" event={"ID":"cd88c3db-a819-4fb9-a952-30dc1b67c375","Type":"ContainerStarted","Data":"a2dedd3cec76668966b6030997640df7d07f5177d18d0cacdba068745aa3c935"} Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.356022 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qvt7n" event={"ID":"cd88c3db-a819-4fb9-a952-30dc1b67c375","Type":"ContainerStarted","Data":"d3503986c6a7688b293783f2395b14efd987f49084e96e8220f507b6ff9781f7"} Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.359168 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a869529f-3f63-4fae-a11d-23cd1738d3a0","Type":"ContainerStarted","Data":"a21e1e2d3579056c238648d0f57139680a0d07e0be3c528871ef201d95d32e18"} Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.359193 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a869529f-3f63-4fae-a11d-23cd1738d3a0","Type":"ContainerStarted","Data":"9f8784367e73e96f0f58519afff2b5b1ab0898d2e9c219270d74dedd9856bec1"} Dec 03 10:56:23 crc kubenswrapper[4756]: E1203 10:56:23.367036 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-972dg" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" Dec 03 10:56:23 crc kubenswrapper[4756]: E1203 10:56:23.367335 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5r5m9" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.397112 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.397087924 podStartE2EDuration="4.397087924s" podCreationTimestamp="2025-12-03 10:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:56:23.375479903 +0000 UTC m=+194.405481147" watchObservedRunningTime="2025-12-03 10:56:23.397087924 +0000 UTC m=+194.427089188" Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.401399 4756 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-multus/network-metrics-daemon-qvt7n" podStartSLOduration=174.40138816 podStartE2EDuration="2m54.40138816s" podCreationTimestamp="2025-12-03 10:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:56:23.393781478 +0000 UTC m=+194.423782732" watchObservedRunningTime="2025-12-03 10:56:23.40138816 +0000 UTC m=+194.431389414" Dec 03 10:56:23 crc kubenswrapper[4756]: I1203 10:56:23.413551 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=9.413533249 podStartE2EDuration="9.413533249s" podCreationTimestamp="2025-12-03 10:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:56:23.410925936 +0000 UTC m=+194.440927210" watchObservedRunningTime="2025-12-03 10:56:23.413533249 +0000 UTC m=+194.443534503" Dec 03 10:56:24 crc kubenswrapper[4756]: I1203 10:56:24.372996 4756 generic.go:334] "Generic (PLEG): container finished" podID="a869529f-3f63-4fae-a11d-23cd1738d3a0" containerID="a21e1e2d3579056c238648d0f57139680a0d07e0be3c528871ef201d95d32e18" exitCode=0 Dec 03 10:56:24 crc kubenswrapper[4756]: I1203 10:56:24.373734 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a869529f-3f63-4fae-a11d-23cd1738d3a0","Type":"ContainerDied","Data":"a21e1e2d3579056c238648d0f57139680a0d07e0be3c528871ef201d95d32e18"} Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.647387 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.699390 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dgdb"] Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.804822 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a869529f-3f63-4fae-a11d-23cd1738d3a0-kubelet-dir\") pod \"a869529f-3f63-4fae-a11d-23cd1738d3a0\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.804981 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a869529f-3f63-4fae-a11d-23cd1738d3a0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a869529f-3f63-4fae-a11d-23cd1738d3a0" (UID: "a869529f-3f63-4fae-a11d-23cd1738d3a0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.805294 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a869529f-3f63-4fae-a11d-23cd1738d3a0-kube-api-access\") pod \"a869529f-3f63-4fae-a11d-23cd1738d3a0\" (UID: \"a869529f-3f63-4fae-a11d-23cd1738d3a0\") " Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.805609 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a869529f-3f63-4fae-a11d-23cd1738d3a0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.811399 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a869529f-3f63-4fae-a11d-23cd1738d3a0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a869529f-3f63-4fae-a11d-23cd1738d3a0" (UID: 
"a869529f-3f63-4fae-a11d-23cd1738d3a0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:56:25 crc kubenswrapper[4756]: I1203 10:56:25.906557 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a869529f-3f63-4fae-a11d-23cd1738d3a0-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:56:26 crc kubenswrapper[4756]: I1203 10:56:26.387220 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a869529f-3f63-4fae-a11d-23cd1738d3a0","Type":"ContainerDied","Data":"9f8784367e73e96f0f58519afff2b5b1ab0898d2e9c219270d74dedd9856bec1"} Dec 03 10:56:26 crc kubenswrapper[4756]: I1203 10:56:26.387292 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f8784367e73e96f0f58519afff2b5b1ab0898d2e9c219270d74dedd9856bec1" Dec 03 10:56:26 crc kubenswrapper[4756]: I1203 10:56:26.387298 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 10:56:33 crc kubenswrapper[4756]: I1203 10:56:33.439507 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l688d" event={"ID":"89c85576-c4b8-4c93-8ce0-de9654286cff","Type":"ContainerStarted","Data":"b8eb38f334a2afead845d4de46ff808f50cb99fe80ab8dfd85a47324f861975e"} Dec 03 10:56:34 crc kubenswrapper[4756]: I1203 10:56:34.448428 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zs5t" event={"ID":"1759c8af-db81-4841-a773-d8e3aaa6d9f2","Type":"ContainerStarted","Data":"1a614dcc7cf169ace2ff5463d7abb9a2de7dab1ca1ae6a3d78f7578add5462fc"} Dec 03 10:56:34 crc kubenswrapper[4756]: I1203 10:56:34.452499 4756 generic.go:334] "Generic (PLEG): container finished" podID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerID="b8eb38f334a2afead845d4de46ff808f50cb99fe80ab8dfd85a47324f861975e" exitCode=0 Dec 03 10:56:34 crc kubenswrapper[4756]: I1203 10:56:34.452560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l688d" event={"ID":"89c85576-c4b8-4c93-8ce0-de9654286cff","Type":"ContainerDied","Data":"b8eb38f334a2afead845d4de46ff808f50cb99fe80ab8dfd85a47324f861975e"} Dec 03 10:56:35 crc kubenswrapper[4756]: I1203 10:56:35.460735 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l688d" event={"ID":"89c85576-c4b8-4c93-8ce0-de9654286cff","Type":"ContainerStarted","Data":"a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6"} Dec 03 10:56:35 crc kubenswrapper[4756]: I1203 10:56:35.463093 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4514e32-bd45-46cf-b849-42c2d941f777" containerID="491a651623ae7890afca11f840ee6044cdd4806988c6f72aa6c5c293e0030f07" exitCode=0 Dec 03 10:56:35 crc kubenswrapper[4756]: I1203 10:56:35.463154 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-972dg" event={"ID":"a4514e32-bd45-46cf-b849-42c2d941f777","Type":"ContainerDied","Data":"491a651623ae7890afca11f840ee6044cdd4806988c6f72aa6c5c293e0030f07"} Dec 03 10:56:35 crc kubenswrapper[4756]: I1203 10:56:35.467022 4756 generic.go:334] "Generic (PLEG): container finished" podID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerID="1a614dcc7cf169ace2ff5463d7abb9a2de7dab1ca1ae6a3d78f7578add5462fc" exitCode=0 Dec 03 10:56:35 crc kubenswrapper[4756]: I1203 10:56:35.467059 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zs5t" event={"ID":"1759c8af-db81-4841-a773-d8e3aaa6d9f2","Type":"ContainerDied","Data":"1a614dcc7cf169ace2ff5463d7abb9a2de7dab1ca1ae6a3d78f7578add5462fc"} Dec 03 10:56:35 crc kubenswrapper[4756]: I1203 10:56:35.500601 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l688d" podStartSLOduration=4.754403649 podStartE2EDuration="1m3.500573445s" podCreationTimestamp="2025-12-03 10:55:32 +0000 UTC" firstStartedPulling="2025-12-03 10:55:36.291460407 +0000 UTC m=+147.321461651" lastFinishedPulling="2025-12-03 10:56:35.037630203 +0000 UTC m=+206.067631447" observedRunningTime="2025-12-03 10:56:35.492310811 +0000 UTC m=+206.522312045" watchObservedRunningTime="2025-12-03 10:56:35.500573445 +0000 UTC m=+206.530574689" Dec 03 10:56:36 crc kubenswrapper[4756]: I1203 10:56:36.475536 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghq75" event={"ID":"37203e5a-c71e-4397-b96c-8f152834e488","Type":"ContainerStarted","Data":"7382a6c1faad200c4a85ca7ce84a1f76205b4901ab531718524ab29057de1c57"} Dec 03 10:56:36 crc kubenswrapper[4756]: I1203 10:56:36.479489 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-972dg" 
event={"ID":"a4514e32-bd45-46cf-b849-42c2d941f777","Type":"ContainerStarted","Data":"abb4cd8b9761781d791d711ad9d8e912bc8d959082b6a14e266247836984bad9"} Dec 03 10:56:36 crc kubenswrapper[4756]: I1203 10:56:36.484234 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zs5t" event={"ID":"1759c8af-db81-4841-a773-d8e3aaa6d9f2","Type":"ContainerStarted","Data":"41481de92048d13956bcd9334b577f1a624a0dcb1a3e1b34f32c02c0a7122788"} Dec 03 10:56:36 crc kubenswrapper[4756]: I1203 10:56:36.487306 4756 generic.go:334] "Generic (PLEG): container finished" podID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerID="c489f4258c2a668bd8dfd78be266cb4bb589aa9e372e12fa91c5065dfe9b863a" exitCode=0 Dec 03 10:56:36 crc kubenswrapper[4756]: I1203 10:56:36.487386 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zc78" event={"ID":"72f3b1cb-9933-4c78-ad7a-d27f873da187","Type":"ContainerDied","Data":"c489f4258c2a668bd8dfd78be266cb4bb589aa9e372e12fa91c5065dfe9b863a"} Dec 03 10:56:36 crc kubenswrapper[4756]: I1203 10:56:36.528939 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9zs5t" podStartSLOduration=3.987451964 podStartE2EDuration="1m1.528908939s" podCreationTimestamp="2025-12-03 10:55:35 +0000 UTC" firstStartedPulling="2025-12-03 10:55:38.731164564 +0000 UTC m=+149.761165808" lastFinishedPulling="2025-12-03 10:56:36.272621539 +0000 UTC m=+207.302622783" observedRunningTime="2025-12-03 10:56:36.525236481 +0000 UTC m=+207.555237725" watchObservedRunningTime="2025-12-03 10:56:36.528908939 +0000 UTC m=+207.558910183" Dec 03 10:56:36 crc kubenswrapper[4756]: I1203 10:56:36.577032 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-972dg" podStartSLOduration=3.041795169 podStartE2EDuration="1m4.577012539s" podCreationTimestamp="2025-12-03 10:55:32 +0000 UTC" 
firstStartedPulling="2025-12-03 10:55:34.690510958 +0000 UTC m=+145.720512202" lastFinishedPulling="2025-12-03 10:56:36.225728328 +0000 UTC m=+207.255729572" observedRunningTime="2025-12-03 10:56:36.572576176 +0000 UTC m=+207.602577430" watchObservedRunningTime="2025-12-03 10:56:36.577012539 +0000 UTC m=+207.607013793" Dec 03 10:56:37 crc kubenswrapper[4756]: I1203 10:56:37.495908 4756 generic.go:334] "Generic (PLEG): container finished" podID="37203e5a-c71e-4397-b96c-8f152834e488" containerID="7382a6c1faad200c4a85ca7ce84a1f76205b4901ab531718524ab29057de1c57" exitCode=0 Dec 03 10:56:37 crc kubenswrapper[4756]: I1203 10:56:37.496004 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghq75" event={"ID":"37203e5a-c71e-4397-b96c-8f152834e488","Type":"ContainerDied","Data":"7382a6c1faad200c4a85ca7ce84a1f76205b4901ab531718524ab29057de1c57"} Dec 03 10:56:42 crc kubenswrapper[4756]: I1203 10:56:42.393965 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:56:42 crc kubenswrapper[4756]: I1203 10:56:42.395004 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:56:42 crc kubenswrapper[4756]: I1203 10:56:42.586676 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:56:42 crc kubenswrapper[4756]: I1203 10:56:42.636564 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:56:42 crc kubenswrapper[4756]: I1203 10:56:42.779082 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l688d" Dec 03 10:56:42 crc kubenswrapper[4756]: I1203 10:56:42.779160 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-l688d" Dec 03 10:56:42 crc kubenswrapper[4756]: I1203 10:56:42.940238 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l688d" Dec 03 10:56:43 crc kubenswrapper[4756]: I1203 10:56:43.583067 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l688d" Dec 03 10:56:44 crc kubenswrapper[4756]: I1203 10:56:44.475997 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l688d"] Dec 03 10:56:45 crc kubenswrapper[4756]: I1203 10:56:45.551930 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l688d" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="registry-server" containerID="cri-o://a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6" gracePeriod=2 Dec 03 10:56:45 crc kubenswrapper[4756]: I1203 10:56:45.749026 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:56:45 crc kubenswrapper[4756]: I1203 10:56:45.749108 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:56:45 crc kubenswrapper[4756]: I1203 10:56:45.792015 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:56:46 crc kubenswrapper[4756]: I1203 10:56:46.609751 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:56:48 crc kubenswrapper[4756]: I1203 10:56:48.571280 4756 generic.go:334] "Generic (PLEG): container finished" podID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerID="a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6" exitCode=0 Dec 03 
10:56:48 crc kubenswrapper[4756]: I1203 10:56:48.571364 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l688d" event={"ID":"89c85576-c4b8-4c93-8ce0-de9654286cff","Type":"ContainerDied","Data":"a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6"} Dec 03 10:56:50 crc kubenswrapper[4756]: I1203 10:56:50.743123 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" podUID="f8e68876-7e42-40f7-acc3-4cb527be5e06" containerName="oauth-openshift" containerID="cri-o://4d614578e2b88cb32a5acd01898f130cb85015bf20501f82254978e21751d52a" gracePeriod=15 Dec 03 10:56:51 crc kubenswrapper[4756]: I1203 10:56:51.609600 4756 generic.go:334] "Generic (PLEG): container finished" podID="f8e68876-7e42-40f7-acc3-4cb527be5e06" containerID="4d614578e2b88cb32a5acd01898f130cb85015bf20501f82254978e21751d52a" exitCode=0 Dec 03 10:56:51 crc kubenswrapper[4756]: I1203 10:56:51.609673 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" event={"ID":"f8e68876-7e42-40f7-acc3-4cb527be5e06","Type":"ContainerDied","Data":"4d614578e2b88cb32a5acd01898f130cb85015bf20501f82254978e21751d52a"} Dec 03 10:56:52 crc kubenswrapper[4756]: I1203 10:56:52.607204 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:56:52 crc kubenswrapper[4756]: I1203 10:56:52.607325 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Dec 03 10:56:52 crc kubenswrapper[4756]: I1203 10:56:52.607469 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 10:56:52 crc kubenswrapper[4756]: I1203 10:56:52.608534 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 10:56:52 crc kubenswrapper[4756]: I1203 10:56:52.608737 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f" gracePeriod=600 Dec 03 10:56:52 crc kubenswrapper[4756]: E1203 10:56:52.779704 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6 is running failed: container process not found" containerID="a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 10:56:52 crc kubenswrapper[4756]: E1203 10:56:52.781523 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6 is running failed: container process not found" containerID="a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 10:56:52 crc 
kubenswrapper[4756]: E1203 10:56:52.782167 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6 is running failed: container process not found" containerID="a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 10:56:52 crc kubenswrapper[4756]: E1203 10:56:52.782284 4756 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-l688d" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="registry-server" Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.008910 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l688d" Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.121563 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzw5q\" (UniqueName: \"kubernetes.io/projected/89c85576-c4b8-4c93-8ce0-de9654286cff-kube-api-access-vzw5q\") pod \"89c85576-c4b8-4c93-8ce0-de9654286cff\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.121656 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-utilities\") pod \"89c85576-c4b8-4c93-8ce0-de9654286cff\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.121731 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-catalog-content\") pod \"89c85576-c4b8-4c93-8ce0-de9654286cff\" (UID: \"89c85576-c4b8-4c93-8ce0-de9654286cff\") " Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.126080 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-utilities" (OuterVolumeSpecName: "utilities") pod "89c85576-c4b8-4c93-8ce0-de9654286cff" (UID: "89c85576-c4b8-4c93-8ce0-de9654286cff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.129274 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c85576-c4b8-4c93-8ce0-de9654286cff-kube-api-access-vzw5q" (OuterVolumeSpecName: "kube-api-access-vzw5q") pod "89c85576-c4b8-4c93-8ce0-de9654286cff" (UID: "89c85576-c4b8-4c93-8ce0-de9654286cff"). InnerVolumeSpecName "kube-api-access-vzw5q". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.182014 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89c85576-c4b8-4c93-8ce0-de9654286cff" (UID: "89c85576-c4b8-4c93-8ce0-de9654286cff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.223801 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzw5q\" (UniqueName: \"kubernetes.io/projected/89c85576-c4b8-4c93-8ce0-de9654286cff-kube-api-access-vzw5q\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.223851 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.223868 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c85576-c4b8-4c93-8ce0-de9654286cff-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.628990 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l688d" event={"ID":"89c85576-c4b8-4c93-8ce0-de9654286cff","Type":"ContainerDied","Data":"09ee05cc79a263d7b91339430052c6c9a00be12e008ba4ad2a5e1237e1daeb78"}
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.629100 4756 scope.go:117] "RemoveContainer" containerID="a544b4c215ab7fb16e1eef2de45472600ebf9edc755334ee7de831a7e3cdd1e6"
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.629162 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l688d"
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.655522 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l688d"]
Dec 03 10:56:53 crc kubenswrapper[4756]: I1203 10:56:53.657798 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l688d"]
Dec 03 10:56:54 crc kubenswrapper[4756]: I1203 10:56:54.639334 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f" exitCode=0
Dec 03 10:56:54 crc kubenswrapper[4756]: I1203 10:56:54.639364 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f"}
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.243076 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" path="/var/lib/kubelet/pods/89c85576-c4b8-4c93-8ce0-de9654286cff/volumes"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.254656 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310336 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"]
Dec 03 10:56:55 crc kubenswrapper[4756]: E1203 10:56:55.310697 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a869529f-3f63-4fae-a11d-23cd1738d3a0" containerName="pruner"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310713 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a869529f-3f63-4fae-a11d-23cd1738d3a0" containerName="pruner"
Dec 03 10:56:55 crc kubenswrapper[4756]: E1203 10:56:55.310722 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="extract-content"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310730 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="extract-content"
Dec 03 10:56:55 crc kubenswrapper[4756]: E1203 10:56:55.310752 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="registry-server"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310760 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="registry-server"
Dec 03 10:56:55 crc kubenswrapper[4756]: E1203 10:56:55.310777 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="extract-utilities"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310785 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="extract-utilities"
Dec 03 10:56:55 crc kubenswrapper[4756]: E1203 10:56:55.310797 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e68876-7e42-40f7-acc3-4cb527be5e06" containerName="oauth-openshift"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310804 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e68876-7e42-40f7-acc3-4cb527be5e06" containerName="oauth-openshift"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310926 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e68876-7e42-40f7-acc3-4cb527be5e06" containerName="oauth-openshift"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.310940 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a869529f-3f63-4fae-a11d-23cd1738d3a0" containerName="pruner"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.311005 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c85576-c4b8-4c93-8ce0-de9654286cff" containerName="registry-server"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.311534 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.325011 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"]
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.329298 4756 scope.go:117] "RemoveContainer" containerID="b8eb38f334a2afead845d4de46ff808f50cb99fe80ab8dfd85a47324f861975e"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.354527 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwbfg\" (UniqueName: \"kubernetes.io/projected/f8e68876-7e42-40f7-acc3-4cb527be5e06-kube-api-access-pwbfg\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.354598 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-serving-cert\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.354643 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-idp-0-file-data\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.354682 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-policies\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.354768 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-session\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.354844 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-router-certs\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.354898 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-dir\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355030 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-trusted-ca-bundle\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-provider-selection\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355157 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-ocp-branding-template\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355212 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-login\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355259 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-cliconfig\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355321 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-service-ca\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355361 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-error\") pod \"f8e68876-7e42-40f7-acc3-4cb527be5e06\" (UID: \"f8e68876-7e42-40f7-acc3-4cb527be5e06\") "
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355556 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.355813 4756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.357173 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.357894 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.361215 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.364215 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.364641 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e68876-7e42-40f7-acc3-4cb527be5e06-kube-api-access-pwbfg" (OuterVolumeSpecName: "kube-api-access-pwbfg") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "kube-api-access-pwbfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.365218 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.367094 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.369098 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.369828 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.369978 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.371742 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.373769 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.374375 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f8e68876-7e42-40f7-acc3-4cb527be5e06" (UID: "f8e68876-7e42-40f7-acc3-4cb527be5e06"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.387107 4756 scope.go:117] "RemoveContainer" containerID="96e1b16c999e9b147402f934e362a5bb7b45653a2155bafd125a4e44b48975d3"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.456788 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1d15639-0081-4164-8bb9-19f1d3147f79-audit-dir\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.456851 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.456874 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5swh\" (UniqueName: \"kubernetes.io/projected/a1d15639-0081-4164-8bb9-19f1d3147f79-kube-api-access-v5swh\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.456891 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.456918 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.457257 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.457369 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-router-certs\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.457637 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-error\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.457771 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-audit-policies\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.457968 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458099 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-service-ca\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458206 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-session\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458301 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-login\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458403 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458546 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458621 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458690 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458824 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.458894 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459005 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459081 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459147 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwbfg\" (UniqueName: \"kubernetes.io/projected/f8e68876-7e42-40f7-acc3-4cb527be5e06-kube-api-access-pwbfg\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459252 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459311 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459483 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8e68876-7e42-40f7-acc3-4cb527be5e06-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459547 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.459608 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8e68876-7e42-40f7-acc3-4cb527be5e06-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560371 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-audit-policies\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560454 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560485 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-service-ca\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560516 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-session\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560544 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-login\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560574 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560606 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1d15639-0081-4164-8bb9-19f1d3147f79-audit-dir\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560638 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560665 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5swh\" (UniqueName: \"kubernetes.io/projected/a1d15639-0081-4164-8bb9-19f1d3147f79-kube-api-access-v5swh\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560690 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560722 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-router-certs\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.560817 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-error\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.562451 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1d15639-0081-4164-8bb9-19f1d3147f79-audit-dir\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.564366 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.564564 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-service-ca\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.565089 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-audit-policies\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.565592 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.569868 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-error\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.569874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"
Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.569934 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-router-certs\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.576334 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.577376 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.577445 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-user-template-login\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.577733 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-session\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.577892 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d15639-0081-4164-8bb9-19f1d3147f79-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.588062 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5swh\" (UniqueName: \"kubernetes.io/projected/a1d15639-0081-4164-8bb9-19f1d3147f79-kube-api-access-v5swh\") pod \"oauth-openshift-79558c6cc6-rlsz6\" (UID: \"a1d15639-0081-4164-8bb9-19f1d3147f79\") " pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.640993 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.654561 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" event={"ID":"f8e68876-7e42-40f7-acc3-4cb527be5e06","Type":"ContainerDied","Data":"b5ec3636f362a13ecec9e94e02a7d48a9ae15bac7623bdc8379c1904bff7a5d6"} Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.654632 4756 scope.go:117] "RemoveContainer" containerID="4d614578e2b88cb32a5acd01898f130cb85015bf20501f82254978e21751d52a" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.654745 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6dgdb" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.660365 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82dpl" event={"ID":"35b0d395-c4ca-4956-9f3e-134814838598","Type":"ContainerStarted","Data":"e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987"} Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.672579 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghq75" event={"ID":"37203e5a-c71e-4397-b96c-8f152834e488","Type":"ContainerStarted","Data":"c54803b650b98d17e1f0bfcf56b5c06d97c1d82b8b62a4875e7b4542ebf5ceea"} Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.676357 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5r5m9" event={"ID":"361ccc2b-5e39-4a12-ae52-3926f47f097d","Type":"ContainerStarted","Data":"8d3c65453d66d887e9b950b40b7e3ecb06cb5c08b46f8c5e285bac30543a8c7f"} Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.704240 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"c9ee7eef0aeb97ea95a86f3145a817bc46a4709e501b1caee8826aa807385845"} Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.724480 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cjlg" event={"ID":"c971a632-2e34-4783-bb0a-9e516fb8bdbd","Type":"ContainerStarted","Data":"99effb663575bc04506533f8c67e018488e85d036215ed4970fd165da87e52be"} Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.742803 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ghq75" podStartSLOduration=4.214568459 podStartE2EDuration="1m20.742786376s" 
podCreationTimestamp="2025-12-03 10:55:35 +0000 UTC" firstStartedPulling="2025-12-03 10:55:38.801377448 +0000 UTC m=+149.831378692" lastFinishedPulling="2025-12-03 10:56:55.329595325 +0000 UTC m=+226.359596609" observedRunningTime="2025-12-03 10:56:55.715998679 +0000 UTC m=+226.745999943" watchObservedRunningTime="2025-12-03 10:56:55.742786376 +0000 UTC m=+226.772787620" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.750283 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zc78" event={"ID":"72f3b1cb-9933-4c78-ad7a-d27f873da187","Type":"ContainerStarted","Data":"ebf7579b2e64885f300160716f39de69a6bc1fe066e53ece5b68f94af6bf95d1"} Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.813181 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dgdb"] Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.815684 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6dgdb"] Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.875651 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:56:55 crc kubenswrapper[4756]: I1203 10:56:55.875704 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.171664 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5zc78" podStartSLOduration=4.346233426 podStartE2EDuration="1m22.171635138s" podCreationTimestamp="2025-12-03 10:55:34 +0000 UTC" firstStartedPulling="2025-12-03 10:55:37.491961022 +0000 UTC m=+148.521962266" lastFinishedPulling="2025-12-03 10:56:55.317362724 +0000 UTC m=+226.347363978" observedRunningTime="2025-12-03 10:56:55.858508679 +0000 UTC m=+226.888509913" 
watchObservedRunningTime="2025-12-03 10:56:56.171635138 +0000 UTC m=+227.201636382" Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.173810 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79558c6cc6-rlsz6"] Dec 03 10:56:56 crc kubenswrapper[4756]: W1203 10:56:56.181344 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d15639_0081_4164_8bb9_19f1d3147f79.slice/crio-6bd186839271ccfcaef1756f46839f532ca02fdba030589eb9329fceb77fed3d WatchSource:0}: Error finding container 6bd186839271ccfcaef1756f46839f532ca02fdba030589eb9329fceb77fed3d: Status 404 returned error can't find the container with id 6bd186839271ccfcaef1756f46839f532ca02fdba030589eb9329fceb77fed3d Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.758805 4756 generic.go:334] "Generic (PLEG): container finished" podID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerID="8d3c65453d66d887e9b950b40b7e3ecb06cb5c08b46f8c5e285bac30543a8c7f" exitCode=0 Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.758891 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5r5m9" event={"ID":"361ccc2b-5e39-4a12-ae52-3926f47f097d","Type":"ContainerDied","Data":"8d3c65453d66d887e9b950b40b7e3ecb06cb5c08b46f8c5e285bac30543a8c7f"} Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.761566 4756 generic.go:334] "Generic (PLEG): container finished" podID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerID="99effb663575bc04506533f8c67e018488e85d036215ed4970fd165da87e52be" exitCode=0 Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.761640 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cjlg" event={"ID":"c971a632-2e34-4783-bb0a-9e516fb8bdbd","Type":"ContainerDied","Data":"99effb663575bc04506533f8c67e018488e85d036215ed4970fd165da87e52be"} Dec 03 10:56:56 crc kubenswrapper[4756]: 
I1203 10:56:56.764317 4756 generic.go:334] "Generic (PLEG): container finished" podID="35b0d395-c4ca-4956-9f3e-134814838598" containerID="e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987" exitCode=0 Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.764495 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82dpl" event={"ID":"35b0d395-c4ca-4956-9f3e-134814838598","Type":"ContainerDied","Data":"e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987"} Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.768332 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" event={"ID":"a1d15639-0081-4164-8bb9-19f1d3147f79","Type":"ContainerStarted","Data":"1fa20a73a5b3e808aacbc2decad94c219ba5b659a44bad8dfb68fd077a392c09"} Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.768415 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" event={"ID":"a1d15639-0081-4164-8bb9-19f1d3147f79","Type":"ContainerStarted","Data":"6bd186839271ccfcaef1756f46839f532ca02fdba030589eb9329fceb77fed3d"} Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.833104 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" podStartSLOduration=31.833072281 podStartE2EDuration="31.833072281s" podCreationTimestamp="2025-12-03 10:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:56:56.829130976 +0000 UTC m=+227.859132240" watchObservedRunningTime="2025-12-03 10:56:56.833072281 +0000 UTC m=+227.863073535" Dec 03 10:56:56 crc kubenswrapper[4756]: I1203 10:56:56.914935 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ghq75" 
podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="registry-server" probeResult="failure" output=< Dec 03 10:56:56 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 10:56:56 crc kubenswrapper[4756]: > Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.243901 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e68876-7e42-40f7-acc3-4cb527be5e06" path="/var/lib/kubelet/pods/f8e68876-7e42-40f7-acc3-4cb527be5e06/volumes" Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.776890 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cjlg" event={"ID":"c971a632-2e34-4783-bb0a-9e516fb8bdbd","Type":"ContainerStarted","Data":"c54f8384f5219529485e29e4d28268c95c4ca10c8df75fbd3c3b3a17705cad55"} Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.779869 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82dpl" event={"ID":"35b0d395-c4ca-4956-9f3e-134814838598","Type":"ContainerStarted","Data":"51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f"} Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.782377 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5r5m9" event={"ID":"361ccc2b-5e39-4a12-ae52-3926f47f097d","Type":"ContainerStarted","Data":"25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4"} Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.782554 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.793657 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79558c6cc6-rlsz6" Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.803977 4756 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-2cjlg" podStartSLOduration=3.37481163 podStartE2EDuration="1m26.803942447s" podCreationTimestamp="2025-12-03 10:55:31 +0000 UTC" firstStartedPulling="2025-12-03 10:55:33.965019538 +0000 UTC m=+144.995020782" lastFinishedPulling="2025-12-03 10:56:57.394150345 +0000 UTC m=+228.424151599" observedRunningTime="2025-12-03 10:56:57.801417436 +0000 UTC m=+228.831418680" watchObservedRunningTime="2025-12-03 10:56:57.803942447 +0000 UTC m=+228.833943691" Dec 03 10:56:57 crc kubenswrapper[4756]: I1203 10:56:57.824257 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5r5m9" podStartSLOduration=4.168891448 podStartE2EDuration="1m23.824238616s" podCreationTimestamp="2025-12-03 10:55:34 +0000 UTC" firstStartedPulling="2025-12-03 10:55:37.592943609 +0000 UTC m=+148.622944853" lastFinishedPulling="2025-12-03 10:56:57.248290777 +0000 UTC m=+228.278292021" observedRunningTime="2025-12-03 10:56:57.822255973 +0000 UTC m=+228.852257207" watchObservedRunningTime="2025-12-03 10:56:57.824238616 +0000 UTC m=+228.854239860" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.618634 4756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.619895 4756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.619936 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.620428 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb" gracePeriod=15 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.620458 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80" gracePeriod=15 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.620487 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb" gracePeriod=15 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.620539 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8" gracePeriod=15 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.620600 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842" gracePeriod=15 Dec 03 10:57:00 crc 
kubenswrapper[4756]: I1203 10:57:00.622135 4756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 10:57:00 crc kubenswrapper[4756]: E1203 10:57:00.622477 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622492 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 10:57:00 crc kubenswrapper[4756]: E1203 10:57:00.622500 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622507 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 10:57:00 crc kubenswrapper[4756]: E1203 10:57:00.622520 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622527 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 10:57:00 crc kubenswrapper[4756]: E1203 10:57:00.622535 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622541 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 10:57:00 crc kubenswrapper[4756]: E1203 10:57:00.622556 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 10:57:00 crc 
kubenswrapper[4756]: I1203 10:57:00.622562 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 10:57:00 crc kubenswrapper[4756]: E1203 10:57:00.622576 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622582 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622698 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622709 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622721 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622740 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622747 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 10:57:00 crc kubenswrapper[4756]: E1203 10:57:00.622851 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622859 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.622983 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.644634 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.645390 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.645437 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.645489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 
10:57:00.645520 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.645577 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.645624 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.645872 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.671047 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-82dpl" podStartSLOduration=5.961162355 podStartE2EDuration="1m28.671023115s" podCreationTimestamp="2025-12-03 10:55:32 +0000 UTC" firstStartedPulling="2025-12-03 10:55:34.566232887 +0000 UTC m=+145.596234131" lastFinishedPulling="2025-12-03 
10:56:57.276093637 +0000 UTC m=+228.306094891" observedRunningTime="2025-12-03 10:56:57.884886367 +0000 UTC m=+228.914887611" watchObservedRunningTime="2025-12-03 10:57:00.671023115 +0000 UTC m=+231.701024359" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.672936 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746707 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746782 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746820 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746879 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746905 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746923 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746967 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.746992 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747075 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747115 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747136 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747158 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747177 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747198 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747219 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.747242 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.801384 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.802660 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.803306 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb" exitCode=0 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.803332 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80" exitCode=0 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.803340 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8" exitCode=0 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.803353 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842" exitCode=2 Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.803394 4756 scope.go:117] "RemoveContainer" containerID="ea43b3d689635f7b7223d1c0d6d44b3d8bb00a3127c41262f15906144a6b628a" Dec 03 10:57:00 crc kubenswrapper[4756]: I1203 10:57:00.969993 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:01 crc kubenswrapper[4756]: E1203 10:57:01.003046 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187daf57e2d6cc4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 10:57:01.001845839 +0000 UTC m=+232.031847083,LastTimestamp:2025-12-03 10:57:01.001845839 +0000 UTC m=+232.031847083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.821756 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.826562 4756 
generic.go:334] "Generic (PLEG): container finished" podID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" containerID="b5d108952d14e888eeb28660f4f104261afa0dbde01fa2fcafa52a8b24f263dd" exitCode=0 Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.826643 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef","Type":"ContainerDied","Data":"b5d108952d14e888eeb28660f4f104261afa0dbde01fa2fcafa52a8b24f263dd"} Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.827438 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.827622 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.829090 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c"} Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.829120 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cecfea9dcbca73669fbae3e016becb012b021957d616687af4edf67d1494568b"} Dec 03 10:57:01 crc 
kubenswrapper[4756]: I1203 10:57:01.829609 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:01 crc kubenswrapper[4756]: I1203 10:57:01.830038 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.292063 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.292395 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.338922 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.340035 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.340810 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.341736 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.601919 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.602051 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.672391 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.673277 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.673760 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.674393 
4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.674964 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.900274 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.902110 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.902505 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.902943 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.903163 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.906830 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.907204 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.908439 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 10:57:02.908755 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:02 crc kubenswrapper[4756]: I1203 
10:57:02.908996 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.098512 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.099712 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.100585 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.101099 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.101574 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc 
kubenswrapper[4756]: I1203 10:57:03.101855 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.104057 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.142731 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.144318 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.144857 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.145338 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.145995 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.146294 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.197996 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.198546 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.199211 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.199908 4756 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.200443 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.200489 4756 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.200881 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.287274 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kube-api-access\") pod \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.287408 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.287506 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: 
"f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.287585 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.287664 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.287785 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kubelet-dir\") pod \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.287862 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" (UID: "e24b32f1-10bc-4e1b-867a-edc2aab1a5ef"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.288217 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-var-lock\") pod \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\" (UID: \"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef\") " Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.288379 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.288615 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-var-lock" (OuterVolumeSpecName: "var-lock") pod "e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" (UID: "e24b32f1-10bc-4e1b-867a-edc2aab1a5ef"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.288933 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.289046 4756 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.289082 4756 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.289101 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.289117 4756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.298165 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" (UID: "e24b32f1-10bc-4e1b-867a-edc2aab1a5ef"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.390542 4756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.390611 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24b32f1-10bc-4e1b-867a-edc2aab1a5ef-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.402798 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.804058 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.855404 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.857196 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb" exitCode=0 Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.857307 4756 scope.go:117] "RemoveContainer" containerID="75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.857377 4756 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.858169 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.858771 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.859587 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.859919 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.860631 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.863117 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e24b32f1-10bc-4e1b-867a-edc2aab1a5ef","Type":"ContainerDied","Data":"e265db6279c33ee799676e524e72711485d07d400bb28f7436d362b25180095a"} Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.863237 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.863246 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e265db6279c33ee799676e524e72711485d07d400bb28f7436d362b25180095a" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.881078 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.882240 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.882471 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.884646 4756 scope.go:117] "RemoveContainer" containerID="0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.885191 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.885531 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.885980 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.886421 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.886900 4756 
status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.887363 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.888905 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.906106 4756 scope.go:117] "RemoveContainer" containerID="e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.926062 4756 scope.go:117] "RemoveContainer" containerID="737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.946368 4756 scope.go:117] "RemoveContainer" containerID="ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.972820 4756 scope.go:117] "RemoveContainer" containerID="fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.996829 4756 scope.go:117] "RemoveContainer" 
containerID="75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.998655 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\": container with ID starting with 75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb not found: ID does not exist" containerID="75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.998719 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb"} err="failed to get container status \"75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\": rpc error: code = NotFound desc = could not find container \"75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb\": container with ID starting with 75a00e2e78b63661a8c128d2dd4853fb89cbe3b72fae01c2647bd9c9e837eadb not found: ID does not exist" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.998757 4756 scope.go:117] "RemoveContainer" containerID="0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80" Dec 03 10:57:03 crc kubenswrapper[4756]: E1203 10:57:03.999497 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\": container with ID starting with 0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80 not found: ID does not exist" containerID="0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.999555 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80"} err="failed to get container status \"0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\": rpc error: code = NotFound desc = could not find container \"0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80\": container with ID starting with 0098fa32510bdd8adcf77c7a4bee88bd3f1c1a08ec482ad0ea92eb1694134a80 not found: ID does not exist" Dec 03 10:57:03 crc kubenswrapper[4756]: I1203 10:57:03.999595 4756 scope.go:117] "RemoveContainer" containerID="e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8" Dec 03 10:57:04 crc kubenswrapper[4756]: E1203 10:57:04.001236 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\": container with ID starting with e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8 not found: ID does not exist" containerID="e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.001293 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8"} err="failed to get container status \"e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\": rpc error: code = NotFound desc = could not find container \"e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8\": container with ID starting with e695eebf616b50a082c59994414e464ced5062ec21f7ef85527d05d1301aa8a8 not found: ID does not exist" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.001332 4756 scope.go:117] "RemoveContainer" containerID="737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842" Dec 03 10:57:04 crc kubenswrapper[4756]: E1203 10:57:04.001713 4756 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\": container with ID starting with 737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842 not found: ID does not exist" containerID="737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.001768 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842"} err="failed to get container status \"737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\": rpc error: code = NotFound desc = could not find container \"737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842\": container with ID starting with 737073d779eeb995a5dc1e5ed16acde1efead0bd76b67fba7834d08c18f4a842 not found: ID does not exist" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.001798 4756 scope.go:117] "RemoveContainer" containerID="ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb" Dec 03 10:57:04 crc kubenswrapper[4756]: E1203 10:57:04.002224 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\": container with ID starting with ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb not found: ID does not exist" containerID="ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.002253 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb"} err="failed to get container status \"ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\": rpc error: code = NotFound desc = could not find container 
\"ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb\": container with ID starting with ca4c8d4d0a85b689aed62cc2795649109c06e9b34797a0130abf11c359a1d2fb not found: ID does not exist" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.002268 4756 scope.go:117] "RemoveContainer" containerID="fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270" Dec 03 10:57:04 crc kubenswrapper[4756]: E1203 10:57:04.002824 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\": container with ID starting with fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270 not found: ID does not exist" containerID="fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.002858 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270"} err="failed to get container status \"fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\": rpc error: code = NotFound desc = could not find container \"fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270\": container with ID starting with fb01b93fe4851400b83231763e8283d2522e08e2ff355e6b7b6bcfbf2e218270 not found: ID does not exist" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.574206 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.574301 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:57:04 crc kubenswrapper[4756]: E1203 10:57:04.605263 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="1.6s" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.630225 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.630809 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.631470 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.631783 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.632121 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.632560 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.629447 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.916132 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.917181 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.917822 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.918377 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.918754 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.918862 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.918862 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.919285 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.919666 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.960303 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.961169 
4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.961644 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.961945 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.962267 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.962524 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.962762 4756 
status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:04 crc kubenswrapper[4756]: I1203 10:57:04.963051 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.241664 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.922154 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.922233 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.922918 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.923658 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.924013 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.924197 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.924358 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.924504 4756 status_manager.go:851] "Failed to get status for pod" podUID="37203e5a-c71e-4397-b96c-8f152834e488" pod="openshift-marketplace/redhat-operators-ghq75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ghq75\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.924665 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.924921 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.925111 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.925261 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.925413 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.925733 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.925918 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.926084 4756 status_manager.go:851] "Failed to get status for pod" podUID="37203e5a-c71e-4397-b96c-8f152834e488" pod="openshift-marketplace/redhat-operators-ghq75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ghq75\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.967625 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.968395 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.969181 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: 
I1203 10:57:05.969698 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.970031 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.970312 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.970626 4756 status_manager.go:851] "Failed to get status for pod" podUID="37203e5a-c71e-4397-b96c-8f152834e488" pod="openshift-marketplace/redhat-operators-ghq75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ghq75\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:05 crc kubenswrapper[4756]: I1203 10:57:05.970898 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:06 crc kubenswrapper[4756]: E1203 10:57:06.206074 4756 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="3.2s" Dec 03 10:57:07 crc kubenswrapper[4756]: E1203 10:57:07.574392 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:57:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:57:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:57:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T10:57:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:07 crc kubenswrapper[4756]: E1203 10:57:07.574940 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:07 crc kubenswrapper[4756]: E1203 10:57:07.575336 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:07 crc kubenswrapper[4756]: E1203 10:57:07.576005 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:07 crc kubenswrapper[4756]: E1203 10:57:07.576741 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:07 crc kubenswrapper[4756]: E1203 10:57:07.576768 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 10:57:09 crc kubenswrapper[4756]: I1203 10:57:09.237353 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:09 crc kubenswrapper[4756]: I1203 10:57:09.239285 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:09 crc kubenswrapper[4756]: I1203 10:57:09.239883 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:09 crc kubenswrapper[4756]: I1203 10:57:09.240686 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:09 crc kubenswrapper[4756]: I1203 10:57:09.241859 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:09 crc kubenswrapper[4756]: I1203 10:57:09.242679 4756 status_manager.go:851] "Failed to get status for pod" podUID="37203e5a-c71e-4397-b96c-8f152834e488" pod="openshift-marketplace/redhat-operators-ghq75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ghq75\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:09 crc kubenswrapper[4756]: I1203 10:57:09.243341 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:09 crc kubenswrapper[4756]: E1203 10:57:09.407806 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="6.4s" Dec 03 10:57:10 crc kubenswrapper[4756]: E1203 10:57:10.766694 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187daf57e2d6cc4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 10:57:01.001845839 +0000 UTC m=+232.031847083,LastTimestamp:2025-12-03 10:57:01.001845839 +0000 UTC m=+232.031847083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.233344 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.234946 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.235763 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.236352 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.236720 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.237176 4756 status_manager.go:851] "Failed to get status for pod" podUID="37203e5a-c71e-4397-b96c-8f152834e488" pod="openshift-marketplace/redhat-operators-ghq75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ghq75\": dial tcp 
38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.237599 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.238029 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.261510 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.261558 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:14 crc kubenswrapper[4756]: E1203 10:57:14.262233 4756 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.262864 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:14 crc kubenswrapper[4756]: W1203 10:57:14.296320 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5902697b3fa11e6dec183758dc8e035e713bf63b551bedbe6fc6424a98720908 WatchSource:0}: Error finding container 5902697b3fa11e6dec183758dc8e035e713bf63b551bedbe6fc6424a98720908: Status 404 returned error can't find the container with id 5902697b3fa11e6dec183758dc8e035e713bf63b551bedbe6fc6424a98720908 Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.934661 4756 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="08e55a85ae9a99b94b172c70a3588a54ef598b944c701d24a827b92628a817ec" exitCode=0 Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.934746 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"08e55a85ae9a99b94b172c70a3588a54ef598b944c701d24a827b92628a817ec"} Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.935871 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5902697b3fa11e6dec183758dc8e035e713bf63b551bedbe6fc6424a98720908"} Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.936424 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.936463 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.937566 4756 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: E1203 10:57:14.937600 4756 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.938212 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.938655 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.940330 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.940816 4756 status_manager.go:851] "Failed to get status for pod" 
podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.941157 4756 status_manager.go:851] "Failed to get status for pod" podUID="37203e5a-c71e-4397-b96c-8f152834e488" pod="openshift-marketplace/redhat-operators-ghq75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ghq75\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.941500 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.943081 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.943126 4756 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754" exitCode=1 Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.943153 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754"} Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.943560 4756 scope.go:117] "RemoveContainer" 
containerID="3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.944425 4756 status_manager.go:851] "Failed to get status for pod" podUID="35b0d395-c4ca-4956-9f3e-134814838598" pod="openshift-marketplace/certified-operators-82dpl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-82dpl\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.945155 4756 status_manager.go:851] "Failed to get status for pod" podUID="37203e5a-c71e-4397-b96c-8f152834e488" pod="openshift-marketplace/redhat-operators-ghq75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ghq75\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.945497 4756 status_manager.go:851] "Failed to get status for pod" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" pod="openshift-marketplace/redhat-marketplace-5r5m9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5r5m9\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.945732 4756 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.946038 4756 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.946385 4756 status_manager.go:851] "Failed to get status for pod" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" pod="openshift-marketplace/redhat-marketplace-5zc78" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5zc78\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.946767 4756 status_manager.go:851] "Failed to get status for pod" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:14 crc kubenswrapper[4756]: I1203 10:57:14.947136 4756 status_manager.go:851] "Failed to get status for pod" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" pod="openshift-marketplace/certified-operators-2cjlg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2cjlg\": dial tcp 38.102.83.233:6443: connect: connection refused" Dec 03 10:57:15 crc kubenswrapper[4756]: I1203 10:57:15.272681 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:57:15 crc kubenswrapper[4756]: I1203 10:57:15.953725 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f01f9402d5d4d59a5a3704af9bff19be5f1a84f36c9fcd76f48fa4881123eb39"} Dec 03 10:57:15 crc kubenswrapper[4756]: I1203 10:57:15.953793 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d7d37147f3060b66893cf03cbe71f6dbfb5568fb320d666e297cf47510f68c0e"} Dec 03 10:57:15 crc kubenswrapper[4756]: I1203 10:57:15.958631 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 10:57:15 crc kubenswrapper[4756]: I1203 10:57:15.958708 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c24ffee1bcb084b3f4b36bc41ad425fa3e86960bb734d3d8e7030adae1ede992"} Dec 03 10:57:16 crc kubenswrapper[4756]: I1203 10:57:16.057551 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:57:16 crc kubenswrapper[4756]: I1203 10:57:16.968649 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"daf232a0f757a133dd127978f92c5cc28067de6bbe2abf42f7fc77c50db208a7"} Dec 03 10:57:16 crc kubenswrapper[4756]: I1203 10:57:16.968717 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dab3c08d196ecc43118e393a4055fab27be8a98027fb74449a5aff762b41ed3c"} Dec 03 10:57:16 crc kubenswrapper[4756]: I1203 10:57:16.968739 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5acc66ebc71156288bfd5455ebdb4589aaedb54b79d840c728cdce5c2db2d8c8"} Dec 03 10:57:16 crc kubenswrapper[4756]: I1203 10:57:16.968931 
4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:16 crc kubenswrapper[4756]: I1203 10:57:16.968979 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:16 crc kubenswrapper[4756]: I1203 10:57:16.969002 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:19 crc kubenswrapper[4756]: I1203 10:57:19.263667 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:19 crc kubenswrapper[4756]: I1203 10:57:19.264102 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:19 crc kubenswrapper[4756]: I1203 10:57:19.273762 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:22 crc kubenswrapper[4756]: I1203 10:57:22.022795 4756 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:22 crc kubenswrapper[4756]: I1203 10:57:22.218331 4756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f570ea67-c06a-4131-af29-e8cca9a8791b" Dec 03 10:57:22 crc kubenswrapper[4756]: I1203 10:57:22.440982 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:57:22 crc kubenswrapper[4756]: I1203 10:57:22.441403 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 10:57:22 crc kubenswrapper[4756]: I1203 10:57:22.441501 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 10:57:23 crc kubenswrapper[4756]: I1203 10:57:23.007489 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:23 crc kubenswrapper[4756]: I1203 10:57:23.007525 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:23 crc kubenswrapper[4756]: I1203 10:57:23.011754 4756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f570ea67-c06a-4131-af29-e8cca9a8791b" Dec 03 10:57:23 crc kubenswrapper[4756]: I1203 10:57:23.013512 4756 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://d7d37147f3060b66893cf03cbe71f6dbfb5568fb320d666e297cf47510f68c0e" Dec 03 10:57:23 crc kubenswrapper[4756]: I1203 10:57:23.013551 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:24 crc kubenswrapper[4756]: I1203 10:57:24.014790 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:24 crc kubenswrapper[4756]: I1203 10:57:24.014829 4756 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a2e21b3a-5bdf-47a2-9d78-4614ec42ca25" Dec 03 10:57:24 crc kubenswrapper[4756]: I1203 10:57:24.019187 4756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f570ea67-c06a-4131-af29-e8cca9a8791b" Dec 03 10:57:31 crc kubenswrapper[4756]: I1203 10:57:31.614709 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 10:57:31 crc kubenswrapper[4756]: I1203 10:57:31.864864 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 10:57:31 crc kubenswrapper[4756]: I1203 10:57:31.867743 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 10:57:31 crc kubenswrapper[4756]: I1203 10:57:31.903762 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 10:57:32 crc kubenswrapper[4756]: I1203 10:57:32.248020 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 10:57:32 crc kubenswrapper[4756]: I1203 10:57:32.287168 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 10:57:32 crc kubenswrapper[4756]: I1203 10:57:32.440844 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 10:57:32 crc kubenswrapper[4756]: I1203 10:57:32.441591 4756 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 10:57:32 crc kubenswrapper[4756]: I1203 10:57:32.494115 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 10:57:32 crc kubenswrapper[4756]: I1203 10:57:32.906183 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 10:57:32 crc kubenswrapper[4756]: I1203 10:57:32.949042 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 10:57:33 crc kubenswrapper[4756]: I1203 10:57:33.171622 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 10:57:33 crc kubenswrapper[4756]: I1203 10:57:33.324607 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.096514 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.147638 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.155572 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.474874 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 10:57:34 crc kubenswrapper[4756]: 
I1203 10:57:34.484627 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.760863 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.853475 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.859474 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.922844 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.932425 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 10:57:34 crc kubenswrapper[4756]: I1203 10:57:34.963777 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.137148 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.278383 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.589514 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.593533 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.651730 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.733518 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.758423 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.811996 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.813914 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.821062 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.855107 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.976045 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 10:57:35 crc kubenswrapper[4756]: I1203 10:57:35.981414 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.047296 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.047871 4756 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.047886 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.121236 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.191324 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.239325 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.246685 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.318311 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.390389 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.438507 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.444215 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.637792 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 
10:57:36.749003 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.758335 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.770017 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.787434 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.831710 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.846736 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.880129 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.887071 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.907584 4756 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.911352 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.911329493 podStartE2EDuration="36.911329493s" podCreationTimestamp="2025-12-03 10:57:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:57:22.135673318 +0000 UTC m=+253.165674552" watchObservedRunningTime="2025-12-03 10:57:36.911329493 +0000 UTC m=+267.941330737" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.912703 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.912753 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.918110 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.932809 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.932191246 podStartE2EDuration="14.932191246s" podCreationTimestamp="2025-12-03 10:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:57:36.930190362 +0000 UTC m=+267.960191616" watchObservedRunningTime="2025-12-03 10:57:36.932191246 +0000 UTC m=+267.962192500" Dec 03 10:57:36 crc kubenswrapper[4756]: I1203 10:57:36.962815 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.039598 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.073057 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 
10:57:37.175494 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.181095 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.192744 4756 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.518673 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.626826 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.706464 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.760231 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.847763 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.849332 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.856744 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.876004 4756 reflector.go:368] 
Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 10:57:37 crc kubenswrapper[4756]: I1203 10:57:37.879669 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.011332 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.045610 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.142008 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.178180 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.240340 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.255714 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.261069 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.401791 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.419716 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: 
I1203 10:57:38.462115 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.540779 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.583165 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.619732 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.640939 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.683182 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.684009 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.684491 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.694043 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.730810 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.758437 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.766667 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.833674 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.887504 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.909614 4756 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.909644 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.911677 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 10:57:38 crc kubenswrapper[4756]: I1203 10:57:38.989680 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.086984 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.259821 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.269181 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.363455 4756 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.425239 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.442536 4756 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.448813 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.465135 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.479708 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.508216 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.572023 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.646283 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.726525 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.779072 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: 
I1203 10:57:39.849851 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.887771 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.895203 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 10:57:39 crc kubenswrapper[4756]: I1203 10:57:39.906636 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.046591 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.064536 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.076196 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.108051 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.122433 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.187914 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.237907 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.370713 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.445237 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.495433 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.568365 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.726154 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.744685 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.811759 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.825006 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.875990 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.905584 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.905878 4756 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 10:57:40 crc kubenswrapper[4756]: I1203 10:57:40.954532 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.082223 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.099918 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.188940 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.202474 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.243667 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.268444 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.385927 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.395336 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.431621 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 10:57:41 
crc kubenswrapper[4756]: I1203 10:57:41.447069 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.577339 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.603298 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.652355 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.652504 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.779398 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.810608 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.840684 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.860572 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.881867 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.903853 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.915198 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.939587 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 10:57:41 crc kubenswrapper[4756]: I1203 10:57:41.972345 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.013558 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.039027 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.098842 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.104501 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.330234 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.394703 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.409578 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 10:57:42 crc 
kubenswrapper[4756]: I1203 10:57:42.423715 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.440535 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.440633 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.440714 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.441598 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c24ffee1bcb084b3f4b36bc41ad425fa3e86960bb734d3d8e7030adae1ede992"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.441725 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://c24ffee1bcb084b3f4b36bc41ad425fa3e86960bb734d3d8e7030adae1ede992" gracePeriod=30 Dec 03 10:57:42 crc 
kubenswrapper[4756]: I1203 10:57:42.474159 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.493279 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.521810 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.540815 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.543911 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.658890 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.684001 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.700918 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.706382 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.777075 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.859288 4756 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.886479 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 10:57:42 crc kubenswrapper[4756]: I1203 10:57:42.994644 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.053613 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.053739 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.126176 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.189043 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.338743 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.470274 4756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.470387 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.470625 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.471264 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c" gracePeriod=5 Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.538676 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.560032 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.676439 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.789934 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.795003 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.849537 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.856430 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 10:57:43 crc kubenswrapper[4756]: I1203 10:57:43.880854 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.094650 4756 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.117741 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.198097 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.201424 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.221565 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.242414 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.288855 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.335354 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.358229 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.508775 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.645837 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.663090 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.664234 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 10:57:44 crc kubenswrapper[4756]: I1203 10:57:44.886415 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.097806 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.098025 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.123506 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.123925 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.144926 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.192147 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.369176 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 10:57:45 crc 
kubenswrapper[4756]: I1203 10:57:45.415685 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.438242 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.485501 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.706205 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.733391 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.751102 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.790435 4756 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 10:57:45 crc kubenswrapper[4756]: I1203 10:57:45.901943 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.038608 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.146702 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.228139 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.270478 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.445829 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.460523 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.464734 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.528097 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.553709 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.678082 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.786150 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.843050 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 10:57:46 crc kubenswrapper[4756]: I1203 10:57:46.914545 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 10:57:47 crc 
kubenswrapper[4756]: I1203 10:57:47.127210 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 10:57:47 crc kubenswrapper[4756]: I1203 10:57:47.147168 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 10:57:47 crc kubenswrapper[4756]: I1203 10:57:47.257149 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 10:57:47 crc kubenswrapper[4756]: I1203 10:57:47.259350 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 10:57:47 crc kubenswrapper[4756]: I1203 10:57:47.344139 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 10:57:47 crc kubenswrapper[4756]: I1203 10:57:47.508027 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 10:57:47 crc kubenswrapper[4756]: I1203 10:57:47.953349 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 10:57:48 crc kubenswrapper[4756]: I1203 10:57:48.042249 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 10:57:48 crc kubenswrapper[4756]: I1203 10:57:48.174182 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 10:57:48 crc kubenswrapper[4756]: I1203 10:57:48.523824 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 10:57:48 crc kubenswrapper[4756]: I1203 10:57:48.566306 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 10:57:48 crc kubenswrapper[4756]: I1203 10:57:48.911065 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 10:57:48 crc kubenswrapper[4756]: I1203 10:57:48.926507 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 10:57:48 crc kubenswrapper[4756]: I1203 10:57:48.956061 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.668199 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.712328 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.712570 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.900982 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901073 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901168 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901191 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901213 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901285 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901401 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901445 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901644 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901933 4756 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901985 4756 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.901999 4756 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.902011 4756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:49 crc kubenswrapper[4756]: I1203 10:57:49.911850 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 10:57:50 crc kubenswrapper[4756]: I1203 10:57:50.003008 4756 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 10:57:50 crc kubenswrapper[4756]: I1203 10:57:50.206998 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 10:57:50 crc kubenswrapper[4756]: I1203 10:57:50.207056 4756 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c" exitCode=137 Dec 03 10:57:50 crc kubenswrapper[4756]: I1203 10:57:50.207109 4756 scope.go:117] "RemoveContainer" containerID="7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c" Dec 03 10:57:50 crc kubenswrapper[4756]: I1203 10:57:50.207263 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 10:57:50 crc kubenswrapper[4756]: I1203 10:57:50.236368 4756 scope.go:117] "RemoveContainer" containerID="7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c" Dec 03 10:57:50 crc kubenswrapper[4756]: E1203 10:57:50.237100 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c\": container with ID starting with 7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c not found: ID does not exist" containerID="7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c" Dec 03 10:57:50 crc kubenswrapper[4756]: I1203 10:57:50.237146 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c"} err="failed to get container status \"7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c\": rpc error: code = NotFound desc = could not find container \"7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c\": container with ID starting with 7f491430feae1b80277753fd8b10b6bf9c4ca01cbd2d304e93603a179d283c2c not found: ID does not exist" Dec 03 10:57:51 crc kubenswrapper[4756]: I1203 10:57:51.241797 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 03 10:57:52 crc kubenswrapper[4756]: I1203 10:57:51.242090 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 03 10:57:52 crc kubenswrapper[4756]: I1203 10:57:51.253544 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 10:57:52 crc kubenswrapper[4756]: I1203 
10:57:51.254000 4756 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="fff0b435-2948-464d-b57e-9d54194b975d" Dec 03 10:57:52 crc kubenswrapper[4756]: I1203 10:57:51.256628 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 10:57:52 crc kubenswrapper[4756]: I1203 10:57:51.256695 4756 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="fff0b435-2948-464d-b57e-9d54194b975d" Dec 03 10:57:52 crc kubenswrapper[4756]: I1203 10:57:51.277489 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 10:58:07 crc kubenswrapper[4756]: I1203 10:58:07.321306 4756 generic.go:334] "Generic (PLEG): container finished" podID="d6cddd35-757a-487a-afb5-d75d73224aee" containerID="2e4a02a3ec77872668d7c1ba4c8fd250926b509a8cf30093f1510434e7cb8534" exitCode=0 Dec 03 10:58:07 crc kubenswrapper[4756]: I1203 10:58:07.321407 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" event={"ID":"d6cddd35-757a-487a-afb5-d75d73224aee","Type":"ContainerDied","Data":"2e4a02a3ec77872668d7c1ba4c8fd250926b509a8cf30093f1510434e7cb8534"} Dec 03 10:58:07 crc kubenswrapper[4756]: I1203 10:58:07.324455 4756 scope.go:117] "RemoveContainer" containerID="2e4a02a3ec77872668d7c1ba4c8fd250926b509a8cf30093f1510434e7cb8534" Dec 03 10:58:08 crc kubenswrapper[4756]: I1203 10:58:08.333895 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" event={"ID":"d6cddd35-757a-487a-afb5-d75d73224aee","Type":"ContainerStarted","Data":"9edfa61f84a5cf7d6a42baab61887d6d5c2b8705ebbb79acdb7456209cef2f05"} Dec 03 10:58:08 crc kubenswrapper[4756]: I1203 10:58:08.335198 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:58:08 crc kubenswrapper[4756]: I1203 10:58:08.338184 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:58:09 crc kubenswrapper[4756]: I1203 10:58:09.103566 4756 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 03 10:58:13 crc kubenswrapper[4756]: I1203 10:58:13.365497 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 03 10:58:13 crc kubenswrapper[4756]: I1203 10:58:13.368824 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 10:58:13 crc kubenswrapper[4756]: I1203 10:58:13.368882 4756 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c24ffee1bcb084b3f4b36bc41ad425fa3e86960bb734d3d8e7030adae1ede992" exitCode=137 Dec 03 10:58:13 crc kubenswrapper[4756]: I1203 10:58:13.368926 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c24ffee1bcb084b3f4b36bc41ad425fa3e86960bb734d3d8e7030adae1ede992"} Dec 03 10:58:13 crc kubenswrapper[4756]: I1203 10:58:13.368975 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26a88a01e820b72fbcb53a6fb1c1aa94248465c2195277de3a277ea4659c4cef"} Dec 03 10:58:13 crc kubenswrapper[4756]: I1203 
10:58:13.368995 4756 scope.go:117] "RemoveContainer" containerID="3fcbbb2be23511ed82efb12ee9240fd969db054919911bff80fa53b043095754" Dec 03 10:58:14 crc kubenswrapper[4756]: I1203 10:58:14.379715 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 03 10:58:16 crc kubenswrapper[4756]: I1203 10:58:16.058406 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:58:22 crc kubenswrapper[4756]: I1203 10:58:22.440218 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:58:22 crc kubenswrapper[4756]: I1203 10:58:22.446440 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:58:23 crc kubenswrapper[4756]: I1203 10:58:23.441254 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 10:58:29 crc kubenswrapper[4756]: I1203 10:58:29.596098 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv"] Dec 03 10:58:29 crc kubenswrapper[4756]: I1203 10:58:29.596711 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" podUID="1541db70-51f2-4236-854e-6ec0f8fa3010" containerName="route-controller-manager" containerID="cri-o://97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed" gracePeriod=30 Dec 03 10:58:29 crc kubenswrapper[4756]: I1203 10:58:29.606099 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-t4rfh"] Dec 03 10:58:29 crc kubenswrapper[4756]: I1203 10:58:29.607300 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" podUID="7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" containerName="controller-manager" containerID="cri-o://ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312" gracePeriod=30 Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.053801 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.058106 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.197968 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w57b6\" (UniqueName: \"kubernetes.io/projected/1541db70-51f2-4236-854e-6ec0f8fa3010-kube-api-access-w57b6\") pod \"1541db70-51f2-4236-854e-6ec0f8fa3010\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.198058 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-client-ca\") pod \"1541db70-51f2-4236-854e-6ec0f8fa3010\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.198088 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-config\") pod \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " Dec 03 10:58:30 crc kubenswrapper[4756]: 
I1203 10:58:30.198154 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-config\") pod \"1541db70-51f2-4236-854e-6ec0f8fa3010\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.198373 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-serving-cert\") pod \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.198494 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-client-ca\") pod \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.198541 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-proxy-ca-bundles\") pod \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.198578 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1541db70-51f2-4236-854e-6ec0f8fa3010-serving-cert\") pod \"1541db70-51f2-4236-854e-6ec0f8fa3010\" (UID: \"1541db70-51f2-4236-854e-6ec0f8fa3010\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.198642 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2qhz\" (UniqueName: \"kubernetes.io/projected/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-kube-api-access-f2qhz\") pod 
\"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\" (UID: \"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad\") " Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.199584 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-client-ca" (OuterVolumeSpecName: "client-ca") pod "1541db70-51f2-4236-854e-6ec0f8fa3010" (UID: "1541db70-51f2-4236-854e-6ec0f8fa3010"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.199601 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" (UID: "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.199674 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" (UID: "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.199740 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-config" (OuterVolumeSpecName: "config") pod "1541db70-51f2-4236-854e-6ec0f8fa3010" (UID: "1541db70-51f2-4236-854e-6ec0f8fa3010"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.199770 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-config" (OuterVolumeSpecName: "config") pod "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" (UID: "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.207563 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1541db70-51f2-4236-854e-6ec0f8fa3010-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1541db70-51f2-4236-854e-6ec0f8fa3010" (UID: "1541db70-51f2-4236-854e-6ec0f8fa3010"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.207612 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" (UID: "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.208185 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1541db70-51f2-4236-854e-6ec0f8fa3010-kube-api-access-w57b6" (OuterVolumeSpecName: "kube-api-access-w57b6") pod "1541db70-51f2-4236-854e-6ec0f8fa3010" (UID: "1541db70-51f2-4236-854e-6ec0f8fa3010"). InnerVolumeSpecName "kube-api-access-w57b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.208369 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-kube-api-access-f2qhz" (OuterVolumeSpecName: "kube-api-access-f2qhz") pod "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" (UID: "7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad"). InnerVolumeSpecName "kube-api-access-f2qhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300464 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2qhz\" (UniqueName: \"kubernetes.io/projected/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-kube-api-access-f2qhz\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300516 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w57b6\" (UniqueName: \"kubernetes.io/projected/1541db70-51f2-4236-854e-6ec0f8fa3010-kube-api-access-w57b6\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300541 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300555 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300570 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1541db70-51f2-4236-854e-6ec0f8fa3010-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300584 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300595 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300608 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.300620 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1541db70-51f2-4236-854e-6ec0f8fa3010-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.482360 4756 generic.go:334] "Generic (PLEG): container finished" podID="1541db70-51f2-4236-854e-6ec0f8fa3010" containerID="97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed" exitCode=0 Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.482433 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.482463 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" event={"ID":"1541db70-51f2-4236-854e-6ec0f8fa3010","Type":"ContainerDied","Data":"97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed"} Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.482503 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv" event={"ID":"1541db70-51f2-4236-854e-6ec0f8fa3010","Type":"ContainerDied","Data":"224215c7da5b93915bfb2a3adc215443ed141b651341533f77dc68b4e4cc1bfd"} Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.482528 4756 scope.go:117] "RemoveContainer" containerID="97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.485011 4756 generic.go:334] "Generic (PLEG): container finished" podID="7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" containerID="ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312" exitCode=0 Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.485086 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.485092 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" event={"ID":"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad","Type":"ContainerDied","Data":"ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312"} Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.485248 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t4rfh" event={"ID":"7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad","Type":"ContainerDied","Data":"4dcaf59768693bd5908606f47536b2c41d4fa6cd9de27a13514a7720e30aa2b9"} Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.505171 4756 scope.go:117] "RemoveContainer" containerID="97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed" Dec 03 10:58:30 crc kubenswrapper[4756]: E1203 10:58:30.505675 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed\": container with ID starting with 97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed not found: ID does not exist" containerID="97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.505757 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed"} err="failed to get container status \"97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed\": rpc error: code = NotFound desc = could not find container \"97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed\": container with ID starting with 97872f5bab91b3d39a7083de5cd24107f6e1379b286aa8cbbc3ffc0515c4b5ed not found: ID does not 
exist" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.505794 4756 scope.go:117] "RemoveContainer" containerID="ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.525926 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv"] Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.529206 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpxxv"] Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.532838 4756 scope.go:117] "RemoveContainer" containerID="ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312" Dec 03 10:58:30 crc kubenswrapper[4756]: E1203 10:58:30.533553 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312\": container with ID starting with ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312 not found: ID does not exist" containerID="ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.533704 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312"} err="failed to get container status \"ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312\": rpc error: code = NotFound desc = could not find container \"ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312\": container with ID starting with ddf93b80ff0fa1ac18fbea0208b1ebf67ce3ea9d1005ba1a6d03eea5727dc312 not found: ID does not exist" Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.570593 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-t4rfh"] Dec 03 10:58:30 crc kubenswrapper[4756]: I1203 10:58:30.573257 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t4rfh"] Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.242888 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1541db70-51f2-4236-854e-6ec0f8fa3010" path="/var/lib/kubelet/pods/1541db70-51f2-4236-854e-6ec0f8fa3010/volumes" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.243760 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" path="/var/lib/kubelet/pods/7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad/volumes" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587294 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949"] Dec 03 10:58:31 crc kubenswrapper[4756]: E1203 10:58:31.587648 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" containerName="controller-manager" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587668 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" containerName="controller-manager" Dec 03 10:58:31 crc kubenswrapper[4756]: E1203 10:58:31.587692 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587698 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 10:58:31 crc kubenswrapper[4756]: E1203 10:58:31.587707 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1541db70-51f2-4236-854e-6ec0f8fa3010" containerName="route-controller-manager" Dec 03 10:58:31 crc 
kubenswrapper[4756]: I1203 10:58:31.587713 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1541db70-51f2-4236-854e-6ec0f8fa3010" containerName="route-controller-manager" Dec 03 10:58:31 crc kubenswrapper[4756]: E1203 10:58:31.587726 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" containerName="installer" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587732 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" containerName="installer" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587857 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0ab74b-8dc7-4cf5-bf8b-9bcf4f85e5ad" containerName="controller-manager" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587873 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587885 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1541db70-51f2-4236-854e-6ec0f8fa3010" containerName="route-controller-manager" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.587902 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24b32f1-10bc-4e1b-867a-edc2aab1a5ef" containerName="installer" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.588563 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.593114 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d5c67b778-g7wh8"] Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.594318 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.595908 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.596113 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.597273 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.597351 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.597708 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.598627 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.598737 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.599041 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.599245 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.599526 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 
10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.599564 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.600220 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.600602 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d5c67b778-g7wh8"] Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.607284 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949"] Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.609506 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.719130 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-client-ca\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.719756 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-serving-cert\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.719797 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-config\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.719835 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgz8g\" (UniqueName: \"kubernetes.io/projected/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-kube-api-access-bgz8g\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.719969 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf06062-ebd4-4a1d-8479-887f70e6ca8c-serving-cert\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.720044 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-proxy-ca-bundles\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.720073 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-client-ca\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " 
pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.720125 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhbs\" (UniqueName: \"kubernetes.io/projected/daf06062-ebd4-4a1d-8479-887f70e6ca8c-kube-api-access-gdhbs\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.720156 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-config\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821663 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf06062-ebd4-4a1d-8479-887f70e6ca8c-serving-cert\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821724 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-proxy-ca-bundles\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821757 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-client-ca\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821801 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdhbs\" (UniqueName: \"kubernetes.io/projected/daf06062-ebd4-4a1d-8479-887f70e6ca8c-kube-api-access-gdhbs\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821826 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-config\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821847 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-client-ca\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821864 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-config\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 
10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821881 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-serving-cert\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.821900 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgz8g\" (UniqueName: \"kubernetes.io/projected/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-kube-api-access-bgz8g\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.823229 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-client-ca\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.823753 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-client-ca\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.823997 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-proxy-ca-bundles\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: 
\"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.824281 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-config\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.824478 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-config\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.826518 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-serving-cert\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.827038 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf06062-ebd4-4a1d-8479-887f70e6ca8c-serving-cert\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.844369 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgz8g\" (UniqueName: 
\"kubernetes.io/projected/16c7b8ce-3336-4a95-83ae-aebc5f9d18e5-kube-api-access-bgz8g\") pod \"controller-manager-7d5c67b778-g7wh8\" (UID: \"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5\") " pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.847589 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdhbs\" (UniqueName: \"kubernetes.io/projected/daf06062-ebd4-4a1d-8479-887f70e6ca8c-kube-api-access-gdhbs\") pod \"route-controller-manager-59b55d474f-gw949\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.909469 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:31 crc kubenswrapper[4756]: I1203 10:58:31.917707 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:32 crc kubenswrapper[4756]: I1203 10:58:32.222730 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949"] Dec 03 10:58:32 crc kubenswrapper[4756]: I1203 10:58:32.382440 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d5c67b778-g7wh8"] Dec 03 10:58:32 crc kubenswrapper[4756]: W1203 10:58:32.386989 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c7b8ce_3336_4a95_83ae_aebc5f9d18e5.slice/crio-c0834fdb8c104f8c95d27b42c7d2a93698a369e6ebfed4715653e1c66e9a3753 WatchSource:0}: Error finding container c0834fdb8c104f8c95d27b42c7d2a93698a369e6ebfed4715653e1c66e9a3753: Status 404 returned error can't find the container with id c0834fdb8c104f8c95d27b42c7d2a93698a369e6ebfed4715653e1c66e9a3753 Dec 03 10:58:32 crc kubenswrapper[4756]: I1203 10:58:32.502401 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" event={"ID":"daf06062-ebd4-4a1d-8479-887f70e6ca8c","Type":"ContainerStarted","Data":"08a13035edd02b4054d94af949efea8aab8ed51b332871958c19de6444255612"} Dec 03 10:58:32 crc kubenswrapper[4756]: I1203 10:58:32.504201 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" event={"ID":"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5","Type":"ContainerStarted","Data":"c0834fdb8c104f8c95d27b42c7d2a93698a369e6ebfed4715653e1c66e9a3753"} Dec 03 10:58:33 crc kubenswrapper[4756]: I1203 10:58:33.512187 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" 
event={"ID":"16c7b8ce-3336-4a95-83ae-aebc5f9d18e5","Type":"ContainerStarted","Data":"6203f4868f05c5677db95cda47b60b832f0dbed39408a3109c69a2d0d9252120"} Dec 03 10:58:33 crc kubenswrapper[4756]: I1203 10:58:33.514127 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:33 crc kubenswrapper[4756]: I1203 10:58:33.515532 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" event={"ID":"daf06062-ebd4-4a1d-8479-887f70e6ca8c","Type":"ContainerStarted","Data":"bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3"} Dec 03 10:58:33 crc kubenswrapper[4756]: I1203 10:58:33.515866 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:33 crc kubenswrapper[4756]: I1203 10:58:33.520431 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" Dec 03 10:58:33 crc kubenswrapper[4756]: I1203 10:58:33.525239 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:33 crc kubenswrapper[4756]: I1203 10:58:33.538490 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d5c67b778-g7wh8" podStartSLOduration=4.538460008 podStartE2EDuration="4.538460008s" podCreationTimestamp="2025-12-03 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:58:33.536224055 +0000 UTC m=+324.566225329" watchObservedRunningTime="2025-12-03 10:58:33.538460008 +0000 UTC m=+324.568461272" Dec 03 10:58:58 crc kubenswrapper[4756]: I1203 
10:58:58.323589 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" podStartSLOduration=29.323567364 podStartE2EDuration="29.323567364s" podCreationTimestamp="2025-12-03 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:58:33.582242511 +0000 UTC m=+324.612243775" watchObservedRunningTime="2025-12-03 10:58:58.323567364 +0000 UTC m=+349.353568608" Dec 03 10:58:58 crc kubenswrapper[4756]: I1203 10:58:58.328023 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949"] Dec 03 10:58:58 crc kubenswrapper[4756]: I1203 10:58:58.328260 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" podUID="daf06062-ebd4-4a1d-8479-887f70e6ca8c" containerName="route-controller-manager" containerID="cri-o://bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3" gracePeriod=30 Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.594243 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.625090 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll"] Dec 03 10:58:59 crc kubenswrapper[4756]: E1203 10:58:59.625408 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf06062-ebd4-4a1d-8479-887f70e6ca8c" containerName="route-controller-manager" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.625431 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf06062-ebd4-4a1d-8479-887f70e6ca8c" containerName="route-controller-manager" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.625578 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf06062-ebd4-4a1d-8479-887f70e6ca8c" containerName="route-controller-manager" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.626061 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.639218 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll"] Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.680877 4756 generic.go:334] "Generic (PLEG): container finished" podID="daf06062-ebd4-4a1d-8479-887f70e6ca8c" containerID="bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3" exitCode=0 Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.680937 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" event={"ID":"daf06062-ebd4-4a1d-8479-887f70e6ca8c","Type":"ContainerDied","Data":"bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3"} Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.681020 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.681042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949" event={"ID":"daf06062-ebd4-4a1d-8479-887f70e6ca8c","Type":"ContainerDied","Data":"08a13035edd02b4054d94af949efea8aab8ed51b332871958c19de6444255612"} Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.681069 4756 scope.go:117] "RemoveContainer" containerID="bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.713359 4756 scope.go:117] "RemoveContainer" containerID="bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3" Dec 03 10:58:59 crc kubenswrapper[4756]: E1203 10:58:59.715742 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3\": container with ID starting with bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3 not found: ID does not exist" containerID="bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.715792 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3"} err="failed to get container status \"bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3\": rpc error: code = NotFound desc = could not find container \"bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3\": container with ID starting with bed223474b0d8c55fe3b92b8f7450ad5aa9f24d16de687c95fa7757741fc63b3 not found: ID does not exist" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716077 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf06062-ebd4-4a1d-8479-887f70e6ca8c-serving-cert\") pod \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716417 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-client-ca\") pod \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716455 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-config\") pod \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716483 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdhbs\" (UniqueName: \"kubernetes.io/projected/daf06062-ebd4-4a1d-8479-887f70e6ca8c-kube-api-access-gdhbs\") pod \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\" (UID: \"daf06062-ebd4-4a1d-8479-887f70e6ca8c\") " Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716577 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-client-ca\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716603 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-serving-cert\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclnp\" (UniqueName: \"kubernetes.io/projected/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-kube-api-access-wclnp\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.716823 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-config\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.717135 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-config" (OuterVolumeSpecName: "config") pod "daf06062-ebd4-4a1d-8479-887f70e6ca8c" (UID: "daf06062-ebd4-4a1d-8479-887f70e6ca8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.717268 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-client-ca" (OuterVolumeSpecName: "client-ca") pod "daf06062-ebd4-4a1d-8479-887f70e6ca8c" (UID: "daf06062-ebd4-4a1d-8479-887f70e6ca8c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.721937 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf06062-ebd4-4a1d-8479-887f70e6ca8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "daf06062-ebd4-4a1d-8479-887f70e6ca8c" (UID: "daf06062-ebd4-4a1d-8479-887f70e6ca8c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.725168 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf06062-ebd4-4a1d-8479-887f70e6ca8c-kube-api-access-gdhbs" (OuterVolumeSpecName: "kube-api-access-gdhbs") pod "daf06062-ebd4-4a1d-8479-887f70e6ca8c" (UID: "daf06062-ebd4-4a1d-8479-887f70e6ca8c"). InnerVolumeSpecName "kube-api-access-gdhbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.818015 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wclnp\" (UniqueName: \"kubernetes.io/projected/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-kube-api-access-wclnp\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.818366 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-config\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.818511 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-client-ca\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.818612 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-serving-cert\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.818747 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.818823 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf06062-ebd4-4a1d-8479-887f70e6ca8c-config\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.818895 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdhbs\" (UniqueName: \"kubernetes.io/projected/daf06062-ebd4-4a1d-8479-887f70e6ca8c-kube-api-access-gdhbs\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.819034 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/daf06062-ebd4-4a1d-8479-887f70e6ca8c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.819355 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-client-ca\") pod 
\"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.819741 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-config\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.821741 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-serving-cert\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.834782 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wclnp\" (UniqueName: \"kubernetes.io/projected/0d3f8b9d-631a-4e75-be0a-0e8bd1632128-kube-api-access-wclnp\") pod \"route-controller-manager-67dff4c5c9-8bfll\" (UID: \"0d3f8b9d-631a-4e75-be0a-0e8bd1632128\") " pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:58:59 crc kubenswrapper[4756]: I1203 10:58:59.946376 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:59:00 crc kubenswrapper[4756]: I1203 10:59:00.025402 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949"] Dec 03 10:59:00 crc kubenswrapper[4756]: I1203 10:59:00.031508 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b55d474f-gw949"] Dec 03 10:59:00 crc kubenswrapper[4756]: I1203 10:59:00.389506 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll"] Dec 03 10:59:00 crc kubenswrapper[4756]: I1203 10:59:00.687188 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" event={"ID":"0d3f8b9d-631a-4e75-be0a-0e8bd1632128","Type":"ContainerStarted","Data":"4b7272fb5c0264f5e623ae18501fdec2d3e494db095e1eeb31107db83d633aa0"} Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.177483 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h2xqh"] Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.178502 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.198355 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h2xqh"] Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.241854 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf06062-ebd4-4a1d-8479-887f70e6ca8c" path="/var/lib/kubelet/pods/daf06062-ebd4-4a1d-8479-887f70e6ca8c/volumes" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.340461 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-registry-certificates\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.340522 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xcgg\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-kube-api-access-9xcgg\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.340732 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-bound-sa-token\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.340814 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-registry-tls\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.340973 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.341006 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-trusted-ca\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.341077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.341145 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.377660 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.442749 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.442826 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-trusted-ca\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.442886 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.442925 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-registry-certificates\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.442973 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xcgg\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-kube-api-access-9xcgg\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.443010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-bound-sa-token\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.443647 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.443736 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-registry-tls\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.444592 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-registry-certificates\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.444910 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-trusted-ca\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.449604 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-registry-tls\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.449671 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.466791 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-bound-sa-token\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.468240 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xcgg\" (UniqueName: \"kubernetes.io/projected/0fc3bdf0-a121-480c-bafa-0741b1a9c9d2-kube-api-access-9xcgg\") pod \"image-registry-66df7c8f76-h2xqh\" (UID: \"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2\") " pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.504645 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.696403 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" event={"ID":"0d3f8b9d-631a-4e75-be0a-0e8bd1632128","Type":"ContainerStarted","Data":"510f4bb2ee02ad0729d97040d8162f88679ebcaed35fc136f2fe163c02961a97"} Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.696628 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.703196 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 10:59:01.716720 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67dff4c5c9-8bfll" podStartSLOduration=3.716654215 podStartE2EDuration="3.716654215s" podCreationTimestamp="2025-12-03 10:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:59:01.712412457 +0000 UTC m=+352.742413721" watchObservedRunningTime="2025-12-03 10:59:01.716654215 +0000 UTC m=+352.746655459" Dec 03 10:59:01 crc kubenswrapper[4756]: I1203 
10:59:01.923122 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h2xqh"] Dec 03 10:59:02 crc kubenswrapper[4756]: I1203 10:59:02.705014 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" event={"ID":"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2","Type":"ContainerStarted","Data":"de5edd0edee8c8433b3fb22e9ec7b3a94c0c32f8062ea20b8eb2a1a0df57dae5"} Dec 03 10:59:02 crc kubenswrapper[4756]: I1203 10:59:02.705490 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" event={"ID":"0fc3bdf0-a121-480c-bafa-0741b1a9c9d2","Type":"ContainerStarted","Data":"78cfe0a5f9dfca9e423c7e2ed8a0ba7f6d0f4f54baa4f2e407e089c833ed795f"} Dec 03 10:59:02 crc kubenswrapper[4756]: I1203 10:59:02.730757 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" podStartSLOduration=1.730732423 podStartE2EDuration="1.730732423s" podCreationTimestamp="2025-12-03 10:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:59:02.727695144 +0000 UTC m=+353.757696408" watchObservedRunningTime="2025-12-03 10:59:02.730732423 +0000 UTC m=+353.760733667" Dec 03 10:59:03 crc kubenswrapper[4756]: I1203 10:59:03.710358 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.210524 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2cjlg"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.211685 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2cjlg" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" 
containerName="registry-server" containerID="cri-o://c54f8384f5219529485e29e4d28268c95c4ca10c8df75fbd3c3b3a17705cad55" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.216054 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-82dpl"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.216382 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-82dpl" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="registry-server" containerID="cri-o://51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.222368 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-972dg"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.223044 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-972dg" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="registry-server" containerID="cri-o://abb4cd8b9761781d791d711ad9d8e912bc8d959082b6a14e266247836984bad9" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.238559 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rfwn"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.238988 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" containerID="cri-o://9edfa61f84a5cf7d6a42baab61887d6d5c2b8705ebbb79acdb7456209cef2f05" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.252689 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5r5m9"] Dec 03 10:59:14 crc 
kubenswrapper[4756]: I1203 10:59:14.253048 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5r5m9" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="registry-server" containerID="cri-o://25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.271708 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zc78"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.272174 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5zc78" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="registry-server" containerID="cri-o://ebf7579b2e64885f300160716f39de69a6bc1fe066e53ece5b68f94af6bf95d1" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.278366 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r52tb"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.279609 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.285308 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9zs5t"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.285870 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9zs5t" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="registry-server" containerID="cri-o://41481de92048d13956bcd9334b577f1a624a0dcb1a3e1b34f32c02c0a7122788" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.304169 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r52tb"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.307348 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghq75"] Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.307586 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ghq75" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="registry-server" containerID="cri-o://c54803b650b98d17e1f0bfcf56b5c06d97c1d82b8b62a4875e7b4542ebf5ceea" gracePeriod=30 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.349754 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdbk\" (UniqueName: \"kubernetes.io/projected/7f5dea91-6dce-4093-a943-05e3359b754d-kube-api-access-4qdbk\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.349863 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/7f5dea91-6dce-4093-a943-05e3359b754d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.349945 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5dea91-6dce-4093-a943-05e3359b754d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.451148 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdbk\" (UniqueName: \"kubernetes.io/projected/7f5dea91-6dce-4093-a943-05e3359b754d-kube-api-access-4qdbk\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.451287 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f5dea91-6dce-4093-a943-05e3359b754d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.451374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5dea91-6dce-4093-a943-05e3359b754d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.454081 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5dea91-6dce-4093-a943-05e3359b754d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.467488 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7f5dea91-6dce-4093-a943-05e3359b754d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.477685 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdbk\" (UniqueName: \"kubernetes.io/projected/7f5dea91-6dce-4093-a943-05e3359b754d-kube-api-access-4qdbk\") pod \"marketplace-operator-79b997595-r52tb\" (UID: \"7f5dea91-6dce-4093-a943-05e3359b754d\") " pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: E1203 10:59:14.577296 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4 is running failed: container process not found" containerID="25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 10:59:14 crc kubenswrapper[4756]: E1203 10:59:14.578675 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4 is running failed: container process not found" containerID="25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 10:59:14 crc kubenswrapper[4756]: E1203 10:59:14.580289 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4 is running failed: container process not found" containerID="25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 10:59:14 crc kubenswrapper[4756]: E1203 10:59:14.580446 4756 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5r5m9" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="registry-server" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.784598 4756 generic.go:334] "Generic (PLEG): container finished" podID="37203e5a-c71e-4397-b96c-8f152834e488" containerID="c54803b650b98d17e1f0bfcf56b5c06d97c1d82b8b62a4875e7b4542ebf5ceea" exitCode=0 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.784664 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghq75" event={"ID":"37203e5a-c71e-4397-b96c-8f152834e488","Type":"ContainerDied","Data":"c54803b650b98d17e1f0bfcf56b5c06d97c1d82b8b62a4875e7b4542ebf5ceea"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.797773 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4514e32-bd45-46cf-b849-42c2d941f777" containerID="abb4cd8b9761781d791d711ad9d8e912bc8d959082b6a14e266247836984bad9" exitCode=0 Dec 03 
10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.797881 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-972dg" event={"ID":"a4514e32-bd45-46cf-b849-42c2d941f777","Type":"ContainerDied","Data":"abb4cd8b9761781d791d711ad9d8e912bc8d959082b6a14e266247836984bad9"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.804537 4756 generic.go:334] "Generic (PLEG): container finished" podID="d6cddd35-757a-487a-afb5-d75d73224aee" containerID="9edfa61f84a5cf7d6a42baab61887d6d5c2b8705ebbb79acdb7456209cef2f05" exitCode=0 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.804625 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" event={"ID":"d6cddd35-757a-487a-afb5-d75d73224aee","Type":"ContainerDied","Data":"9edfa61f84a5cf7d6a42baab61887d6d5c2b8705ebbb79acdb7456209cef2f05"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.804670 4756 scope.go:117] "RemoveContainer" containerID="2e4a02a3ec77872668d7c1ba4c8fd250926b509a8cf30093f1510434e7cb8534" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.812661 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.823761 4756 generic.go:334] "Generic (PLEG): container finished" podID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerID="25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4" exitCode=0 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.823832 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5r5m9" event={"ID":"361ccc2b-5e39-4a12-ae52-3926f47f097d","Type":"ContainerDied","Data":"25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.826109 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.827490 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.839673 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.841745 4756 generic.go:334] "Generic (PLEG): container finished" podID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerID="41481de92048d13956bcd9334b577f1a624a0dcb1a3e1b34f32c02c0a7122788" exitCode=0 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.841835 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zs5t" event={"ID":"1759c8af-db81-4841-a773-d8e3aaa6d9f2","Type":"ContainerDied","Data":"41481de92048d13956bcd9334b577f1a624a0dcb1a3e1b34f32c02c0a7122788"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.843234 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.866994 4756 generic.go:334] "Generic (PLEG): container finished" podID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerID="c54f8384f5219529485e29e4d28268c95c4ca10c8df75fbd3c3b3a17705cad55" exitCode=0 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.867100 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cjlg" event={"ID":"c971a632-2e34-4783-bb0a-9e516fb8bdbd","Type":"ContainerDied","Data":"c54f8384f5219529485e29e4d28268c95c4ca10c8df75fbd3c3b3a17705cad55"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.870756 4756 generic.go:334] "Generic (PLEG): container finished" podID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerID="ebf7579b2e64885f300160716f39de69a6bc1fe066e53ece5b68f94af6bf95d1" exitCode=0 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.870814 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zc78" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.870840 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zc78" event={"ID":"72f3b1cb-9933-4c78-ad7a-d27f873da187","Type":"ContainerDied","Data":"ebf7579b2e64885f300160716f39de69a6bc1fe066e53ece5b68f94af6bf95d1"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.870883 4756 scope.go:117] "RemoveContainer" containerID="ebf7579b2e64885f300160716f39de69a6bc1fe066e53ece5b68f94af6bf95d1" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.871975 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.874269 4756 generic.go:334] "Generic (PLEG): container finished" podID="35b0d395-c4ca-4956-9f3e-134814838598" containerID="51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f" exitCode=0 Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.874327 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82dpl" event={"ID":"35b0d395-c4ca-4956-9f3e-134814838598","Type":"ContainerDied","Data":"51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.874384 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-82dpl" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.874411 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-82dpl" event={"ID":"35b0d395-c4ca-4956-9f3e-134814838598","Type":"ContainerDied","Data":"99b48f65522abc0b8172c6aa8c02c9ee1a347d0903bb7207c285010d1229d9ee"} Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.880375 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.892562 4756 scope.go:117] "RemoveContainer" containerID="c489f4258c2a668bd8dfd78be266cb4bb589aa9e372e12fa91c5065dfe9b863a" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.896559 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.900901 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.921920 4756 scope.go:117] "RemoveContainer" containerID="6feeb57ff3e34510fd89605c4581a34c1aadf136bf636cf71cd2eae6ba7dda2e" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957562 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-utilities\") pod \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957624 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-catalog-content\") pod \"361ccc2b-5e39-4a12-ae52-3926f47f097d\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957657 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmh62\" (UniqueName: \"kubernetes.io/projected/72f3b1cb-9933-4c78-ad7a-d27f873da187-kube-api-access-wmh62\") pod \"72f3b1cb-9933-4c78-ad7a-d27f873da187\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957682 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdh86\" (UniqueName: \"kubernetes.io/projected/c971a632-2e34-4783-bb0a-9e516fb8bdbd-kube-api-access-hdh86\") pod \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957705 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-catalog-content\") pod \"35b0d395-c4ca-4956-9f3e-134814838598\" 
(UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957728 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-catalog-content\") pod \"37203e5a-c71e-4397-b96c-8f152834e488\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957769 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-catalog-content\") pod \"a4514e32-bd45-46cf-b849-42c2d941f777\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957797 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mcw8\" (UniqueName: \"kubernetes.io/projected/d6cddd35-757a-487a-afb5-d75d73224aee-kube-api-access-2mcw8\") pod \"d6cddd35-757a-487a-afb5-d75d73224aee\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957825 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-catalog-content\") pod \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\" (UID: \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957854 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-catalog-content\") pod \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957879 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-m8xj6\" (UniqueName: \"kubernetes.io/projected/37203e5a-c71e-4397-b96c-8f152834e488-kube-api-access-m8xj6\") pod \"37203e5a-c71e-4397-b96c-8f152834e488\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957904 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-catalog-content\") pod \"72f3b1cb-9933-4c78-ad7a-d27f873da187\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957930 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-operator-metrics\") pod \"d6cddd35-757a-487a-afb5-d75d73224aee\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.957987 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-utilities\") pod \"35b0d395-c4ca-4956-9f3e-134814838598\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958003 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sschx\" (UniqueName: \"kubernetes.io/projected/35b0d395-c4ca-4956-9f3e-134814838598-kube-api-access-sschx\") pod \"35b0d395-c4ca-4956-9f3e-134814838598\" (UID: \"35b0d395-c4ca-4956-9f3e-134814838598\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958027 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-utilities\") pod \"c971a632-2e34-4783-bb0a-9e516fb8bdbd\" (UID: 
\"c971a632-2e34-4783-bb0a-9e516fb8bdbd\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958045 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-utilities\") pod \"72f3b1cb-9933-4c78-ad7a-d27f873da187\" (UID: \"72f3b1cb-9933-4c78-ad7a-d27f873da187\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958067 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-trusted-ca\") pod \"d6cddd35-757a-487a-afb5-d75d73224aee\" (UID: \"d6cddd35-757a-487a-afb5-d75d73224aee\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958105 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h26n9\" (UniqueName: \"kubernetes.io/projected/361ccc2b-5e39-4a12-ae52-3926f47f097d-kube-api-access-h26n9\") pod \"361ccc2b-5e39-4a12-ae52-3926f47f097d\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958128 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-utilities\") pod \"a4514e32-bd45-46cf-b849-42c2d941f777\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958146 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpw2p\" (UniqueName: \"kubernetes.io/projected/a4514e32-bd45-46cf-b849-42c2d941f777-kube-api-access-mpw2p\") pod \"a4514e32-bd45-46cf-b849-42c2d941f777\" (UID: \"a4514e32-bd45-46cf-b849-42c2d941f777\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958161 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-utilities\") pod \"37203e5a-c71e-4397-b96c-8f152834e488\" (UID: \"37203e5a-c71e-4397-b96c-8f152834e488\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958180 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxp6\" (UniqueName: \"kubernetes.io/projected/1759c8af-db81-4841-a773-d8e3aaa6d9f2-kube-api-access-prxp6\") pod \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\" (UID: \"1759c8af-db81-4841-a773-d8e3aaa6d9f2\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.958202 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-utilities\") pod \"361ccc2b-5e39-4a12-ae52-3926f47f097d\" (UID: \"361ccc2b-5e39-4a12-ae52-3926f47f097d\") " Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.961613 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-utilities" (OuterVolumeSpecName: "utilities") pod "35b0d395-c4ca-4956-9f3e-134814838598" (UID: "35b0d395-c4ca-4956-9f3e-134814838598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.965055 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f3b1cb-9933-4c78-ad7a-d27f873da187-kube-api-access-wmh62" (OuterVolumeSpecName: "kube-api-access-wmh62") pod "72f3b1cb-9933-4c78-ad7a-d27f873da187" (UID: "72f3b1cb-9933-4c78-ad7a-d27f873da187"). InnerVolumeSpecName "kube-api-access-wmh62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.968186 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-utilities" (OuterVolumeSpecName: "utilities") pod "37203e5a-c71e-4397-b96c-8f152834e488" (UID: "37203e5a-c71e-4397-b96c-8f152834e488"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.969512 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/361ccc2b-5e39-4a12-ae52-3926f47f097d-kube-api-access-h26n9" (OuterVolumeSpecName: "kube-api-access-h26n9") pod "361ccc2b-5e39-4a12-ae52-3926f47f097d" (UID: "361ccc2b-5e39-4a12-ae52-3926f47f097d"). InnerVolumeSpecName "kube-api-access-h26n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.971166 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-utilities" (OuterVolumeSpecName: "utilities") pod "a4514e32-bd45-46cf-b849-42c2d941f777" (UID: "a4514e32-bd45-46cf-b849-42c2d941f777"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.971232 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b0d395-c4ca-4956-9f3e-134814838598-kube-api-access-sschx" (OuterVolumeSpecName: "kube-api-access-sschx") pod "35b0d395-c4ca-4956-9f3e-134814838598" (UID: "35b0d395-c4ca-4956-9f3e-134814838598"). InnerVolumeSpecName "kube-api-access-sschx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.971610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-utilities" (OuterVolumeSpecName: "utilities") pod "1759c8af-db81-4841-a773-d8e3aaa6d9f2" (UID: "1759c8af-db81-4841-a773-d8e3aaa6d9f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.977624 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c971a632-2e34-4783-bb0a-9e516fb8bdbd-kube-api-access-hdh86" (OuterVolumeSpecName: "kube-api-access-hdh86") pod "c971a632-2e34-4783-bb0a-9e516fb8bdbd" (UID: "c971a632-2e34-4783-bb0a-9e516fb8bdbd"). InnerVolumeSpecName "kube-api-access-hdh86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.979731 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cddd35-757a-487a-afb5-d75d73224aee-kube-api-access-2mcw8" (OuterVolumeSpecName: "kube-api-access-2mcw8") pod "d6cddd35-757a-487a-afb5-d75d73224aee" (UID: "d6cddd35-757a-487a-afb5-d75d73224aee"). InnerVolumeSpecName "kube-api-access-2mcw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.984228 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-utilities" (OuterVolumeSpecName: "utilities") pod "361ccc2b-5e39-4a12-ae52-3926f47f097d" (UID: "361ccc2b-5e39-4a12-ae52-3926f47f097d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.987768 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-utilities" (OuterVolumeSpecName: "utilities") pod "c971a632-2e34-4783-bb0a-9e516fb8bdbd" (UID: "c971a632-2e34-4783-bb0a-9e516fb8bdbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.988346 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-utilities" (OuterVolumeSpecName: "utilities") pod "72f3b1cb-9933-4c78-ad7a-d27f873da187" (UID: "72f3b1cb-9933-4c78-ad7a-d27f873da187"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.988426 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4514e32-bd45-46cf-b849-42c2d941f777-kube-api-access-mpw2p" (OuterVolumeSpecName: "kube-api-access-mpw2p") pod "a4514e32-bd45-46cf-b849-42c2d941f777" (UID: "a4514e32-bd45-46cf-b849-42c2d941f777"). InnerVolumeSpecName "kube-api-access-mpw2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.988698 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1759c8af-db81-4841-a773-d8e3aaa6d9f2-kube-api-access-prxp6" (OuterVolumeSpecName: "kube-api-access-prxp6") pod "1759c8af-db81-4841-a773-d8e3aaa6d9f2" (UID: "1759c8af-db81-4841-a773-d8e3aaa6d9f2"). InnerVolumeSpecName "kube-api-access-prxp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.988770 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37203e5a-c71e-4397-b96c-8f152834e488-kube-api-access-m8xj6" (OuterVolumeSpecName: "kube-api-access-m8xj6") pod "37203e5a-c71e-4397-b96c-8f152834e488" (UID: "37203e5a-c71e-4397-b96c-8f152834e488"). InnerVolumeSpecName "kube-api-access-m8xj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:14 crc kubenswrapper[4756]: I1203 10:59:14.988807 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d6cddd35-757a-487a-afb5-d75d73224aee" (UID: "d6cddd35-757a-487a-afb5-d75d73224aee"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.044913 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d6cddd35-757a-487a-afb5-d75d73224aee" (UID: "d6cddd35-757a-487a-afb5-d75d73224aee"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.047147 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72f3b1cb-9933-4c78-ad7a-d27f873da187" (UID: "72f3b1cb-9933-4c78-ad7a-d27f873da187"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.053279 4756 scope.go:117] "RemoveContainer" containerID="51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062077 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062115 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sschx\" (UniqueName: \"kubernetes.io/projected/35b0d395-c4ca-4956-9f3e-134814838598-kube-api-access-sschx\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062128 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062140 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062151 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062160 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h26n9\" (UniqueName: \"kubernetes.io/projected/361ccc2b-5e39-4a12-ae52-3926f47f097d-kube-api-access-h26n9\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062170 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062179 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpw2p\" (UniqueName: \"kubernetes.io/projected/a4514e32-bd45-46cf-b849-42c2d941f777-kube-api-access-mpw2p\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062188 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062196 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxp6\" (UniqueName: \"kubernetes.io/projected/1759c8af-db81-4841-a773-d8e3aaa6d9f2-kube-api-access-prxp6\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062204 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062212 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062221 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmh62\" (UniqueName: \"kubernetes.io/projected/72f3b1cb-9933-4c78-ad7a-d27f873da187-kube-api-access-wmh62\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062230 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdh86\" (UniqueName: \"kubernetes.io/projected/c971a632-2e34-4783-bb0a-9e516fb8bdbd-kube-api-access-hdh86\") on node 
\"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062238 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mcw8\" (UniqueName: \"kubernetes.io/projected/d6cddd35-757a-487a-afb5-d75d73224aee-kube-api-access-2mcw8\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062246 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8xj6\" (UniqueName: \"kubernetes.io/projected/37203e5a-c71e-4397-b96c-8f152834e488-kube-api-access-m8xj6\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062255 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72f3b1cb-9933-4c78-ad7a-d27f873da187-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.062263 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d6cddd35-757a-487a-afb5-d75d73224aee-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.089737 4756 scope.go:117] "RemoveContainer" containerID="e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.091739 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35b0d395-c4ca-4956-9f3e-134814838598" (UID: "35b0d395-c4ca-4956-9f3e-134814838598"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.103031 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "361ccc2b-5e39-4a12-ae52-3926f47f097d" (UID: "361ccc2b-5e39-4a12-ae52-3926f47f097d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.109596 4756 scope.go:117] "RemoveContainer" containerID="55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.115573 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c971a632-2e34-4783-bb0a-9e516fb8bdbd" (UID: "c971a632-2e34-4783-bb0a-9e516fb8bdbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.125728 4756 scope.go:117] "RemoveContainer" containerID="51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f" Dec 03 10:59:15 crc kubenswrapper[4756]: E1203 10:59:15.126333 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f\": container with ID starting with 51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f not found: ID does not exist" containerID="51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.126384 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f"} err="failed to get container status \"51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f\": rpc error: code = NotFound desc = could not find container \"51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f\": container with ID starting with 51d89103908260a8c16c9eecc810331441e149265193c48cf640249e782aee9f not found: ID does not exist" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.126415 4756 scope.go:117] "RemoveContainer" containerID="e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987" Dec 03 10:59:15 crc kubenswrapper[4756]: E1203 10:59:15.126671 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987\": container with ID starting with e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987 not found: ID does not exist" containerID="e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.126699 
4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987"} err="failed to get container status \"e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987\": rpc error: code = NotFound desc = could not find container \"e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987\": container with ID starting with e838cbd6766b2c99560969729ae44daa737392a7df896669e4932f36912fc987 not found: ID does not exist" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.126715 4756 scope.go:117] "RemoveContainer" containerID="55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5" Dec 03 10:59:15 crc kubenswrapper[4756]: E1203 10:59:15.140200 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5\": container with ID starting with 55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5 not found: ID does not exist" containerID="55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.140249 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5"} err="failed to get container status \"55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5\": rpc error: code = NotFound desc = could not find container \"55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5\": container with ID starting with 55c795da3922d7d540bd077dedfdbcba327682d286428ae5b450626fcfd69cf5 not found: ID does not exist" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.141398 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "a4514e32-bd45-46cf-b849-42c2d941f777" (UID: "a4514e32-bd45-46cf-b849-42c2d941f777"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.164726 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361ccc2b-5e39-4a12-ae52-3926f47f097d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.164758 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35b0d395-c4ca-4956-9f3e-134814838598-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.164768 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4514e32-bd45-46cf-b849-42c2d941f777-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.164777 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c971a632-2e34-4783-bb0a-9e516fb8bdbd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.199786 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r52tb"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.222716 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1759c8af-db81-4841-a773-d8e3aaa6d9f2" (UID: "1759c8af-db81-4841-a773-d8e3aaa6d9f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: W1203 10:59:15.224808 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5dea91_6dce_4093_a943_05e3359b754d.slice/crio-112d7c2256f51fce1e7028828b5c1de56ec5b6c581d2ce24099548605928c3f3 WatchSource:0}: Error finding container 112d7c2256f51fce1e7028828b5c1de56ec5b6c581d2ce24099548605928c3f3: Status 404 returned error can't find the container with id 112d7c2256f51fce1e7028828b5c1de56ec5b6c581d2ce24099548605928c3f3 Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.232556 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-82dpl"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.244640 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-82dpl"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.251059 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zc78"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.252004 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37203e5a-c71e-4397-b96c-8f152834e488" (UID: "37203e5a-c71e-4397-b96c-8f152834e488"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.254119 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zc78"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.266509 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1759c8af-db81-4841-a773-d8e3aaa6d9f2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.266545 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37203e5a-c71e-4397-b96c-8f152834e488-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.886125 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5r5m9" event={"ID":"361ccc2b-5e39-4a12-ae52-3926f47f097d","Type":"ContainerDied","Data":"91e45115cbad4726905b52348ae1ac47e14fc63ce22d4c2201d4c5ea523e78fa"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.886486 4756 scope.go:117] "RemoveContainer" containerID="25aa9d61519277f71e2276ce680f0ca9a597bcf012da4e9eb16df40300683ce4" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.886182 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5r5m9" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.890233 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" event={"ID":"d6cddd35-757a-487a-afb5-d75d73224aee","Type":"ContainerDied","Data":"4397ec5631f399e2bd421be9641fa5960124d5bc272cd827cb1c630777731080"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.890348 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7rfwn" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.895061 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9zs5t" event={"ID":"1759c8af-db81-4841-a773-d8e3aaa6d9f2","Type":"ContainerDied","Data":"6b5b613bc49f437352e5ef605b6ca13416c3080f23d563535390276bed925230"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.895074 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9zs5t" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.899637 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghq75" event={"ID":"37203e5a-c71e-4397-b96c-8f152834e488","Type":"ContainerDied","Data":"b4fe77b862cb5dfbee31840bb5de2187642d69e64cb01fc02fbf24013a9ffd37"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.899712 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghq75" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.902204 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-972dg" event={"ID":"a4514e32-bd45-46cf-b849-42c2d941f777","Type":"ContainerDied","Data":"3d95fde0f215bc8fc27c9cee217bb1aa1c29eddda5524eba9a0ece88a36a4183"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.902251 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-972dg" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.906771 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cjlg" event={"ID":"c971a632-2e34-4783-bb0a-9e516fb8bdbd","Type":"ContainerDied","Data":"804ef21924ce6c9a190776d3102aebb2eecd3a13035b352369d8f4a760763e7a"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.906835 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2cjlg" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.909873 4756 scope.go:117] "RemoveContainer" containerID="8d3c65453d66d887e9b950b40b7e3ecb06cb5c08b46f8c5e285bac30543a8c7f" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.914376 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" event={"ID":"7f5dea91-6dce-4093-a943-05e3359b754d","Type":"ContainerStarted","Data":"c07acd5bdc7813882ff2f7ce2b38957e3729a29351aacab46f21150c7a6cc442"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.914428 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" event={"ID":"7f5dea91-6dce-4093-a943-05e3359b754d","Type":"ContainerStarted","Data":"112d7c2256f51fce1e7028828b5c1de56ec5b6c581d2ce24099548605928c3f3"} Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.915311 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.924923 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.932654 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-7rfwn"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.937504 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7rfwn"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.946683 4756 scope.go:117] "RemoveContainer" containerID="bcab86c31974c5d462b424822ba52ab31329a285563da53053b02d58b39ded35" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.955354 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5r5m9"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.962300 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5r5m9"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.965663 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9zs5t"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.968635 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9zs5t"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.970988 4756 scope.go:117] "RemoveContainer" containerID="9edfa61f84a5cf7d6a42baab61887d6d5c2b8705ebbb79acdb7456209cef2f05" Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.977079 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghq75"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.982015 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ghq75"] Dec 03 10:59:15 crc kubenswrapper[4756]: I1203 10:59:15.998138 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-r52tb" podStartSLOduration=1.998110916 podStartE2EDuration="1.998110916s" podCreationTimestamp="2025-12-03 10:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 10:59:15.994643233 +0000 UTC m=+367.024644477" watchObservedRunningTime="2025-12-03 10:59:15.998110916 +0000 UTC m=+367.028112160" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.007562 4756 scope.go:117] "RemoveContainer" containerID="41481de92048d13956bcd9334b577f1a624a0dcb1a3e1b34f32c02c0a7122788" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.014770 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-972dg"] Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.019064 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-972dg"] Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.041817 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2cjlg"] Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.041915 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2cjlg"] Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.047313 4756 scope.go:117] "RemoveContainer" containerID="1a614dcc7cf169ace2ff5463d7abb9a2de7dab1ca1ae6a3d78f7578add5462fc" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.066770 4756 scope.go:117] "RemoveContainer" containerID="e8749b11580d3139bbf8a353b19ef0eaefdff075949832f18f595f4faf0e9b3c" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.082358 4756 scope.go:117] "RemoveContainer" containerID="c54803b650b98d17e1f0bfcf56b5c06d97c1d82b8b62a4875e7b4542ebf5ceea" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.100477 4756 scope.go:117] "RemoveContainer" containerID="7382a6c1faad200c4a85ca7ce84a1f76205b4901ab531718524ab29057de1c57" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.117219 4756 scope.go:117] "RemoveContainer" containerID="eab2eccb8a7be8e821f1039ce35e0c14a7e4f52c6038df272e52c6bb1a7e0d8e" Dec 03 
10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.130739 4756 scope.go:117] "RemoveContainer" containerID="abb4cd8b9761781d791d711ad9d8e912bc8d959082b6a14e266247836984bad9" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.146228 4756 scope.go:117] "RemoveContainer" containerID="491a651623ae7890afca11f840ee6044cdd4806988c6f72aa6c5c293e0030f07" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.168915 4756 scope.go:117] "RemoveContainer" containerID="4dfdad50965a1d21bd07a7865e20eeb30ff47fe48875315b08a319658ec87823" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.197639 4756 scope.go:117] "RemoveContainer" containerID="c54f8384f5219529485e29e4d28268c95c4ca10c8df75fbd3c3b3a17705cad55" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.216346 4756 scope.go:117] "RemoveContainer" containerID="99effb663575bc04506533f8c67e018488e85d036215ed4970fd165da87e52be" Dec 03 10:59:16 crc kubenswrapper[4756]: I1203 10:59:16.237254 4756 scope.go:117] "RemoveContainer" containerID="c09ea5a36d2260eb3394c34a67cb23cddf31963f1ce2926ef2f2428ec27b8ccd" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.242570 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" path="/var/lib/kubelet/pods/1759c8af-db81-4841-a773-d8e3aaa6d9f2/volumes" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.243205 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b0d395-c4ca-4956-9f3e-134814838598" path="/var/lib/kubelet/pods/35b0d395-c4ca-4956-9f3e-134814838598/volumes" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.243854 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" path="/var/lib/kubelet/pods/361ccc2b-5e39-4a12-ae52-3926f47f097d/volumes" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.245083 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37203e5a-c71e-4397-b96c-8f152834e488" 
path="/var/lib/kubelet/pods/37203e5a-c71e-4397-b96c-8f152834e488/volumes" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.245713 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" path="/var/lib/kubelet/pods/72f3b1cb-9933-4c78-ad7a-d27f873da187/volumes" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.246792 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" path="/var/lib/kubelet/pods/a4514e32-bd45-46cf-b849-42c2d941f777/volumes" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.247385 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" path="/var/lib/kubelet/pods/c971a632-2e34-4783-bb0a-9e516fb8bdbd/volumes" Dec 03 10:59:17 crc kubenswrapper[4756]: I1203 10:59:17.248504 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" path="/var/lib/kubelet/pods/d6cddd35-757a-487a-afb5-d75d73224aee/volumes" Dec 03 10:59:21 crc kubenswrapper[4756]: I1203 10:59:21.511135 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-h2xqh" Dec 03 10:59:21 crc kubenswrapper[4756]: I1203 10:59:21.579524 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mdbh6"] Dec 03 10:59:22 crc kubenswrapper[4756]: I1203 10:59:22.607189 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:59:22 crc kubenswrapper[4756]: I1203 10:59:22.607273 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.050441 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rncfc"] Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051308 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051322 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051333 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051339 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051353 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051359 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051369 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051376 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="extract-utilities" Dec 03 10:59:25 crc 
kubenswrapper[4756]: E1203 10:59:25.051385 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051391 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051400 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051408 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051416 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051423 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051430 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051436 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051445 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051451 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" Dec 03 10:59:25 crc 
kubenswrapper[4756]: E1203 10:59:25.051462 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051469 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051476 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051483 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051491 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051497 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051506 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051512 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051519 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051525 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="registry-server" Dec 03 10:59:25 crc 
kubenswrapper[4756]: E1203 10:59:25.051533 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051541 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051549 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051556 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051562 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051569 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051581 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051588 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051597 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051603 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="extract-content" Dec 03 10:59:25 crc 
kubenswrapper[4756]: E1203 10:59:25.051611 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051618 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051624 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051631 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="extract-content" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051642 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051648 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: E1203 10:59:25.051657 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051664 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="extract-utilities" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051755 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f3b1cb-9933-4c78-ad7a-d27f873da187" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051766 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1759c8af-db81-4841-a773-d8e3aaa6d9f2" containerName="registry-server" Dec 03 
10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051775 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b0d395-c4ca-4956-9f3e-134814838598" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051785 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051792 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4514e32-bd45-46cf-b849-42c2d941f777" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051802 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c971a632-2e34-4783-bb0a-9e516fb8bdbd" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051810 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="37203e5a-c71e-4397-b96c-8f152834e488" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051819 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cddd35-757a-487a-afb5-d75d73224aee" containerName="marketplace-operator" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.051827 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="361ccc2b-5e39-4a12-ae52-3926f47f097d" containerName="registry-server" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.052623 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.054664 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.069552 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rncfc"] Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.108898 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dad8b84-7bcf-411d-87e7-f91db9494b86-utilities\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.109288 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dad8b84-7bcf-411d-87e7-f91db9494b86-catalog-content\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.109456 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr82s\" (UniqueName: \"kubernetes.io/projected/7dad8b84-7bcf-411d-87e7-f91db9494b86-kube-api-access-vr82s\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.211459 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dad8b84-7bcf-411d-87e7-f91db9494b86-utilities\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " 
pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.211904 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dad8b84-7bcf-411d-87e7-f91db9494b86-catalog-content\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.212079 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr82s\" (UniqueName: \"kubernetes.io/projected/7dad8b84-7bcf-411d-87e7-f91db9494b86-kube-api-access-vr82s\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.212195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dad8b84-7bcf-411d-87e7-f91db9494b86-utilities\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.212441 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dad8b84-7bcf-411d-87e7-f91db9494b86-catalog-content\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.237638 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr82s\" (UniqueName: \"kubernetes.io/projected/7dad8b84-7bcf-411d-87e7-f91db9494b86-kube-api-access-vr82s\") pod \"redhat-operators-rncfc\" (UID: \"7dad8b84-7bcf-411d-87e7-f91db9494b86\") " pod="openshift-marketplace/redhat-operators-rncfc" Dec 
03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.388200 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.617590 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rncfc"] Dec 03 10:59:25 crc kubenswrapper[4756]: I1203 10:59:25.992126 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rncfc" event={"ID":"7dad8b84-7bcf-411d-87e7-f91db9494b86","Type":"ContainerStarted","Data":"ff67edcf34116bdbf4778168a38bbd1fda9ff6a2b342f208f2d47cda452a93d7"} Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.449336 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fqt2k"] Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.451684 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.455501 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.460167 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqt2k"] Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.534907 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh2w2\" (UniqueName: \"kubernetes.io/projected/24f50f1e-1793-40ca-b105-31910761c4ed-kube-api-access-sh2w2\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.535123 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-catalog-content\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.535271 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-utilities\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.636858 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh2w2\" (UniqueName: \"kubernetes.io/projected/24f50f1e-1793-40ca-b105-31910761c4ed-kube-api-access-sh2w2\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.636928 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-catalog-content\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.636986 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-utilities\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.637575 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-utilities\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.637614 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-catalog-content\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.658601 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh2w2\" (UniqueName: \"kubernetes.io/projected/24f50f1e-1793-40ca-b105-31910761c4ed-kube-api-access-sh2w2\") pod \"certified-operators-fqt2k\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:26 crc kubenswrapper[4756]: I1203 10:59:26.769830 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.003181 4756 generic.go:334] "Generic (PLEG): container finished" podID="7dad8b84-7bcf-411d-87e7-f91db9494b86" containerID="ed4df6e1a76932a76b2fa0d6c26607ebaa0c91b2910d7f81c4df5d619dea1b95" exitCode=0 Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.003299 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rncfc" event={"ID":"7dad8b84-7bcf-411d-87e7-f91db9494b86","Type":"ContainerDied","Data":"ed4df6e1a76932a76b2fa0d6c26607ebaa0c91b2910d7f81c4df5d619dea1b95"} Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.175139 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fqt2k"] Dec 03 10:59:27 crc kubenswrapper[4756]: W1203 10:59:27.180500 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f50f1e_1793_40ca_b105_31910761c4ed.slice/crio-1a44500d43aafd1b4c4ea4c278108e4ffc70a29b61fe25a37f99f714f27561dd WatchSource:0}: Error finding container 1a44500d43aafd1b4c4ea4c278108e4ffc70a29b61fe25a37f99f714f27561dd: Status 404 returned error can't find the container with id 1a44500d43aafd1b4c4ea4c278108e4ffc70a29b61fe25a37f99f714f27561dd Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.448834 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t6jxs"] Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.450535 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.453661 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.462365 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6jxs"] Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.551367 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-utilities\") pod \"community-operators-t6jxs\" (UID: \"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.551425 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58zll\" (UniqueName: \"kubernetes.io/projected/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-kube-api-access-58zll\") pod \"community-operators-t6jxs\" (UID: \"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.551470 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-catalog-content\") pod \"community-operators-t6jxs\" (UID: \"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.653279 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-utilities\") pod \"community-operators-t6jxs\" (UID: 
\"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.653345 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58zll\" (UniqueName: \"kubernetes.io/projected/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-kube-api-access-58zll\") pod \"community-operators-t6jxs\" (UID: \"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.653394 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-catalog-content\") pod \"community-operators-t6jxs\" (UID: \"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.654080 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-utilities\") pod \"community-operators-t6jxs\" (UID: \"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.654790 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-catalog-content\") pod \"community-operators-t6jxs\" (UID: \"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.680141 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58zll\" (UniqueName: \"kubernetes.io/projected/6cb28c3f-5e91-4b53-8eb9-7878c29595a2-kube-api-access-58zll\") pod \"community-operators-t6jxs\" (UID: 
\"6cb28c3f-5e91-4b53-8eb9-7878c29595a2\") " pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:27 crc kubenswrapper[4756]: I1203 10:59:27.777878 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.012158 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rncfc" event={"ID":"7dad8b84-7bcf-411d-87e7-f91db9494b86","Type":"ContainerStarted","Data":"f07720876f74e4c4f3abc041720795199ba6ce382e938b5e8b1f0fb067503426"} Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.013519 4756 generic.go:334] "Generic (PLEG): container finished" podID="24f50f1e-1793-40ca-b105-31910761c4ed" containerID="0b204130c7a7d959e7d46b07db1f929deb0bdad806c18c9d3dd19e460c12a016" exitCode=0 Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.013559 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqt2k" event={"ID":"24f50f1e-1793-40ca-b105-31910761c4ed","Type":"ContainerDied","Data":"0b204130c7a7d959e7d46b07db1f929deb0bdad806c18c9d3dd19e460c12a016"} Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.013576 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqt2k" event={"ID":"24f50f1e-1793-40ca-b105-31910761c4ed","Type":"ContainerStarted","Data":"1a44500d43aafd1b4c4ea4c278108e4ffc70a29b61fe25a37f99f714f27561dd"} Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.190419 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6jxs"] Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.845317 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d88tf"] Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.846419 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.851247 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.856864 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d88tf"] Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.977599 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98d8f022-96b9-4992-847a-52bf83ddb778-catalog-content\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.977672 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98d8f022-96b9-4992-847a-52bf83ddb778-utilities\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:28 crc kubenswrapper[4756]: I1203 10:59:28.978171 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-625qh\" (UniqueName: \"kubernetes.io/projected/98d8f022-96b9-4992-847a-52bf83ddb778-kube-api-access-625qh\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.022346 4756 generic.go:334] "Generic (PLEG): container finished" podID="7dad8b84-7bcf-411d-87e7-f91db9494b86" containerID="f07720876f74e4c4f3abc041720795199ba6ce382e938b5e8b1f0fb067503426" exitCode=0 Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 
10:59:29.022544 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rncfc" event={"ID":"7dad8b84-7bcf-411d-87e7-f91db9494b86","Type":"ContainerDied","Data":"f07720876f74e4c4f3abc041720795199ba6ce382e938b5e8b1f0fb067503426"} Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.028879 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqt2k" event={"ID":"24f50f1e-1793-40ca-b105-31910761c4ed","Type":"ContainerStarted","Data":"c9da5abf19aa10ac30dd4f4348881a67fb195a38389a0cf42f0bd9d5c072daa5"} Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.031112 4756 generic.go:334] "Generic (PLEG): container finished" podID="6cb28c3f-5e91-4b53-8eb9-7878c29595a2" containerID="222e390a7391bae5534c9a9469a7ebf6e6db6b747ae2ae6550eba7c29d53c266" exitCode=0 Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.031162 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6jxs" event={"ID":"6cb28c3f-5e91-4b53-8eb9-7878c29595a2","Type":"ContainerDied","Data":"222e390a7391bae5534c9a9469a7ebf6e6db6b747ae2ae6550eba7c29d53c266"} Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.031182 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6jxs" event={"ID":"6cb28c3f-5e91-4b53-8eb9-7878c29595a2","Type":"ContainerStarted","Data":"60180ed3083f6efb838dda0c91bd869f6e17927e694af2d9d9305fde4771778c"} Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.079893 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-625qh\" (UniqueName: \"kubernetes.io/projected/98d8f022-96b9-4992-847a-52bf83ddb778-kube-api-access-625qh\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.079985 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98d8f022-96b9-4992-847a-52bf83ddb778-catalog-content\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.080018 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98d8f022-96b9-4992-847a-52bf83ddb778-utilities\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.081088 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98d8f022-96b9-4992-847a-52bf83ddb778-catalog-content\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.081219 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98d8f022-96b9-4992-847a-52bf83ddb778-utilities\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.114975 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-625qh\" (UniqueName: \"kubernetes.io/projected/98d8f022-96b9-4992-847a-52bf83ddb778-kube-api-access-625qh\") pod \"redhat-marketplace-d88tf\" (UID: \"98d8f022-96b9-4992-847a-52bf83ddb778\") " pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.188398 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:29 crc kubenswrapper[4756]: I1203 10:59:29.605970 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d88tf"] Dec 03 10:59:29 crc kubenswrapper[4756]: W1203 10:59:29.620989 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98d8f022_96b9_4992_847a_52bf83ddb778.slice/crio-f98d033e9cc9d294cb3b566d4e103e73dbfaa9b5419c63203a2fc11e338ea4a5 WatchSource:0}: Error finding container f98d033e9cc9d294cb3b566d4e103e73dbfaa9b5419c63203a2fc11e338ea4a5: Status 404 returned error can't find the container with id f98d033e9cc9d294cb3b566d4e103e73dbfaa9b5419c63203a2fc11e338ea4a5 Dec 03 10:59:30 crc kubenswrapper[4756]: I1203 10:59:30.047890 4756 generic.go:334] "Generic (PLEG): container finished" podID="24f50f1e-1793-40ca-b105-31910761c4ed" containerID="c9da5abf19aa10ac30dd4f4348881a67fb195a38389a0cf42f0bd9d5c072daa5" exitCode=0 Dec 03 10:59:30 crc kubenswrapper[4756]: I1203 10:59:30.048097 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqt2k" event={"ID":"24f50f1e-1793-40ca-b105-31910761c4ed","Type":"ContainerDied","Data":"c9da5abf19aa10ac30dd4f4348881a67fb195a38389a0cf42f0bd9d5c072daa5"} Dec 03 10:59:30 crc kubenswrapper[4756]: I1203 10:59:30.060012 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d88tf" event={"ID":"98d8f022-96b9-4992-847a-52bf83ddb778","Type":"ContainerStarted","Data":"1de8b6c9d46fc0b142514b752df405ba37c93994ffc860a690e3c211a60b35ac"} Dec 03 10:59:30 crc kubenswrapper[4756]: I1203 10:59:30.060081 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d88tf" 
event={"ID":"98d8f022-96b9-4992-847a-52bf83ddb778","Type":"ContainerStarted","Data":"f98d033e9cc9d294cb3b566d4e103e73dbfaa9b5419c63203a2fc11e338ea4a5"} Dec 03 10:59:30 crc kubenswrapper[4756]: I1203 10:59:30.066013 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rncfc" event={"ID":"7dad8b84-7bcf-411d-87e7-f91db9494b86","Type":"ContainerStarted","Data":"f050c076e59682e74ffe25d22cea6eced256d8a694fc094743052d7509afca69"} Dec 03 10:59:31 crc kubenswrapper[4756]: I1203 10:59:31.074128 4756 generic.go:334] "Generic (PLEG): container finished" podID="98d8f022-96b9-4992-847a-52bf83ddb778" containerID="1de8b6c9d46fc0b142514b752df405ba37c93994ffc860a690e3c211a60b35ac" exitCode=0 Dec 03 10:59:31 crc kubenswrapper[4756]: I1203 10:59:31.074258 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d88tf" event={"ID":"98d8f022-96b9-4992-847a-52bf83ddb778","Type":"ContainerDied","Data":"1de8b6c9d46fc0b142514b752df405ba37c93994ffc860a690e3c211a60b35ac"} Dec 03 10:59:31 crc kubenswrapper[4756]: I1203 10:59:31.077391 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqt2k" event={"ID":"24f50f1e-1793-40ca-b105-31910761c4ed","Type":"ContainerStarted","Data":"2f6416d9132cfb27d34443f65b2b1ac44336611b1bb84e2efc4dda897b9cebe5"} Dec 03 10:59:31 crc kubenswrapper[4756]: I1203 10:59:31.081487 4756 generic.go:334] "Generic (PLEG): container finished" podID="6cb28c3f-5e91-4b53-8eb9-7878c29595a2" containerID="bd07084e0dc6e88a60bd77f7012dfa37ae1e9fb8beec683110decadfde071c81" exitCode=0 Dec 03 10:59:31 crc kubenswrapper[4756]: I1203 10:59:31.081579 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6jxs" event={"ID":"6cb28c3f-5e91-4b53-8eb9-7878c29595a2","Type":"ContainerDied","Data":"bd07084e0dc6e88a60bd77f7012dfa37ae1e9fb8beec683110decadfde071c81"} Dec 03 10:59:31 crc kubenswrapper[4756]: I1203 
10:59:31.097764 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rncfc" podStartSLOduration=3.6895596790000003 podStartE2EDuration="6.097743832s" podCreationTimestamp="2025-12-03 10:59:25 +0000 UTC" firstStartedPulling="2025-12-03 10:59:27.006474162 +0000 UTC m=+378.036475426" lastFinishedPulling="2025-12-03 10:59:29.414658335 +0000 UTC m=+380.444659579" observedRunningTime="2025-12-03 10:59:30.121198516 +0000 UTC m=+381.151199780" watchObservedRunningTime="2025-12-03 10:59:31.097743832 +0000 UTC m=+382.127745076" Dec 03 10:59:32 crc kubenswrapper[4756]: I1203 10:59:32.092780 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6jxs" event={"ID":"6cb28c3f-5e91-4b53-8eb9-7878c29595a2","Type":"ContainerStarted","Data":"b78a214facfa3ca740835ebdc9a4a95b85b05b5e2c1b507fc281ef03ed4a43f1"} Dec 03 10:59:32 crc kubenswrapper[4756]: I1203 10:59:32.096584 4756 generic.go:334] "Generic (PLEG): container finished" podID="98d8f022-96b9-4992-847a-52bf83ddb778" containerID="2f79ace605e2e44bb65dcca4c8bac82e415df78fdec06394b78b99d31c51f83f" exitCode=0 Dec 03 10:59:32 crc kubenswrapper[4756]: I1203 10:59:32.096783 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d88tf" event={"ID":"98d8f022-96b9-4992-847a-52bf83ddb778","Type":"ContainerDied","Data":"2f79ace605e2e44bb65dcca4c8bac82e415df78fdec06394b78b99d31c51f83f"} Dec 03 10:59:32 crc kubenswrapper[4756]: I1203 10:59:32.119865 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t6jxs" podStartSLOduration=2.6006132600000003 podStartE2EDuration="5.11984293s" podCreationTimestamp="2025-12-03 10:59:27 +0000 UTC" firstStartedPulling="2025-12-03 10:59:29.033916504 +0000 UTC m=+380.063917748" lastFinishedPulling="2025-12-03 10:59:31.553146174 +0000 UTC m=+382.583147418" observedRunningTime="2025-12-03 
10:59:32.113814005 +0000 UTC m=+383.143815249" watchObservedRunningTime="2025-12-03 10:59:32.11984293 +0000 UTC m=+383.149844164" Dec 03 10:59:32 crc kubenswrapper[4756]: I1203 10:59:32.121354 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fqt2k" podStartSLOduration=3.478335489 podStartE2EDuration="6.1213478s" podCreationTimestamp="2025-12-03 10:59:26 +0000 UTC" firstStartedPulling="2025-12-03 10:59:28.015087322 +0000 UTC m=+379.045088566" lastFinishedPulling="2025-12-03 10:59:30.658099633 +0000 UTC m=+381.688100877" observedRunningTime="2025-12-03 10:59:31.151375078 +0000 UTC m=+382.181376332" watchObservedRunningTime="2025-12-03 10:59:32.1213478 +0000 UTC m=+383.151349044" Dec 03 10:59:34 crc kubenswrapper[4756]: I1203 10:59:34.109833 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d88tf" event={"ID":"98d8f022-96b9-4992-847a-52bf83ddb778","Type":"ContainerStarted","Data":"a1048ba34bbd794794f2e9fdc29e7997bd4c3a79bb3b3ee3d495302c457a75ec"} Dec 03 10:59:34 crc kubenswrapper[4756]: I1203 10:59:34.133562 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d88tf" podStartSLOduration=3.739533104 podStartE2EDuration="6.133542356s" podCreationTimestamp="2025-12-03 10:59:28 +0000 UTC" firstStartedPulling="2025-12-03 10:59:31.076039084 +0000 UTC m=+382.106040338" lastFinishedPulling="2025-12-03 10:59:33.470048346 +0000 UTC m=+384.500049590" observedRunningTime="2025-12-03 10:59:34.128161831 +0000 UTC m=+385.158163075" watchObservedRunningTime="2025-12-03 10:59:34.133542356 +0000 UTC m=+385.163543600" Dec 03 10:59:35 crc kubenswrapper[4756]: I1203 10:59:35.389174 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:35 crc kubenswrapper[4756]: I1203 10:59:35.389515 4756 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:35 crc kubenswrapper[4756]: I1203 10:59:35.435582 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:36 crc kubenswrapper[4756]: I1203 10:59:36.169658 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rncfc" Dec 03 10:59:36 crc kubenswrapper[4756]: I1203 10:59:36.770121 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:36 crc kubenswrapper[4756]: I1203 10:59:36.770189 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:36 crc kubenswrapper[4756]: I1203 10:59:36.819573 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:37 crc kubenswrapper[4756]: I1203 10:59:37.170846 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 10:59:37 crc kubenswrapper[4756]: I1203 10:59:37.778997 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:37 crc kubenswrapper[4756]: I1203 10:59:37.779052 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:37 crc kubenswrapper[4756]: I1203 10:59:37.825262 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t6jxs" Dec 03 10:59:38 crc kubenswrapper[4756]: I1203 10:59:38.202607 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t6jxs" Dec 03 
10:59:39 crc kubenswrapper[4756]: I1203 10:59:39.189096 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:39 crc kubenswrapper[4756]: I1203 10:59:39.189450 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:39 crc kubenswrapper[4756]: I1203 10:59:39.232306 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:40 crc kubenswrapper[4756]: I1203 10:59:40.202970 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d88tf" Dec 03 10:59:46 crc kubenswrapper[4756]: I1203 10:59:46.623921 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" podUID="96867856-6fdb-4b8a-b19e-54bc30bcc607" containerName="registry" containerID="cri-o://86590e4837a1a00770c1b148d83ae428cec8595caeeb08938267cb99ccc1fc08" gracePeriod=30 Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.199835 4756 generic.go:334] "Generic (PLEG): container finished" podID="96867856-6fdb-4b8a-b19e-54bc30bcc607" containerID="86590e4837a1a00770c1b148d83ae428cec8595caeeb08938267cb99ccc1fc08" exitCode=0 Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.199927 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" event={"ID":"96867856-6fdb-4b8a-b19e-54bc30bcc607","Type":"ContainerDied","Data":"86590e4837a1a00770c1b148d83ae428cec8595caeeb08938267cb99ccc1fc08"} Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.200193 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" 
event={"ID":"96867856-6fdb-4b8a-b19e-54bc30bcc607","Type":"ContainerDied","Data":"630c33837cbd642affa9fc29651918cfcc167d785551eee9f3d06af48d32932c"} Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.200206 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630c33837cbd642affa9fc29651918cfcc167d785551eee9f3d06af48d32932c" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.207512 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.306676 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-tls\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.306962 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-bound-sa-token\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.307201 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.307329 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96867856-6fdb-4b8a-b19e-54bc30bcc607-ca-trust-extracted\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: 
\"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.307397 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-trusted-ca\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.307467 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96867856-6fdb-4b8a-b19e-54bc30bcc607-installation-pull-secrets\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.307541 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-certificates\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.307577 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skmw6\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-kube-api-access-skmw6\") pod \"96867856-6fdb-4b8a-b19e-54bc30bcc607\" (UID: \"96867856-6fdb-4b8a-b19e-54bc30bcc607\") " Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.309934 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.310724 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.313827 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-kube-api-access-skmw6" (OuterVolumeSpecName: "kube-api-access-skmw6") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "kube-api-access-skmw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.313943 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96867856-6fdb-4b8a-b19e-54bc30bcc607-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.314564 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.320821 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.321421 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.336461 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96867856-6fdb-4b8a-b19e-54bc30bcc607-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "96867856-6fdb-4b8a-b19e-54bc30bcc607" (UID: "96867856-6fdb-4b8a-b19e-54bc30bcc607"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.409687 4756 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96867856-6fdb-4b8a-b19e-54bc30bcc607-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.409750 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.409771 4756 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96867856-6fdb-4b8a-b19e-54bc30bcc607-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.409797 4756 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.409819 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skmw6\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-kube-api-access-skmw6\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.409840 4756 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:48 crc kubenswrapper[4756]: I1203 10:59:48.409857 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96867856-6fdb-4b8a-b19e-54bc30bcc607-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 10:59:49 crc 
kubenswrapper[4756]: I1203 10:59:49.207494 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mdbh6" Dec 03 10:59:49 crc kubenswrapper[4756]: I1203 10:59:49.277081 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mdbh6"] Dec 03 10:59:49 crc kubenswrapper[4756]: I1203 10:59:49.284475 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mdbh6"] Dec 03 10:59:51 crc kubenswrapper[4756]: I1203 10:59:51.245361 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96867856-6fdb-4b8a-b19e-54bc30bcc607" path="/var/lib/kubelet/pods/96867856-6fdb-4b8a-b19e-54bc30bcc607/volumes" Dec 03 10:59:52 crc kubenswrapper[4756]: I1203 10:59:52.607513 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 10:59:52 crc kubenswrapper[4756]: I1203 10:59:52.607598 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.192008 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff"] Dec 03 11:00:00 crc kubenswrapper[4756]: E1203 11:00:00.193436 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96867856-6fdb-4b8a-b19e-54bc30bcc607" containerName="registry" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 
11:00:00.193462 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="96867856-6fdb-4b8a-b19e-54bc30bcc607" containerName="registry" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.193687 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="96867856-6fdb-4b8a-b19e-54bc30bcc607" containerName="registry" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.194637 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.197304 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.198152 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.203976 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff"] Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.291565 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e110b86a-f193-448b-bd73-1babbc0b175b-secret-volume\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.291637 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dtlj\" (UniqueName: \"kubernetes.io/projected/e110b86a-f193-448b-bd73-1babbc0b175b-kube-api-access-8dtlj\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.291901 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e110b86a-f193-448b-bd73-1babbc0b175b-config-volume\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.393982 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e110b86a-f193-448b-bd73-1babbc0b175b-secret-volume\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.394392 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dtlj\" (UniqueName: \"kubernetes.io/projected/e110b86a-f193-448b-bd73-1babbc0b175b-kube-api-access-8dtlj\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.394604 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e110b86a-f193-448b-bd73-1babbc0b175b-config-volume\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.395413 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e110b86a-f193-448b-bd73-1babbc0b175b-config-volume\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.400914 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e110b86a-f193-448b-bd73-1babbc0b175b-secret-volume\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.411159 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dtlj\" (UniqueName: \"kubernetes.io/projected/e110b86a-f193-448b-bd73-1babbc0b175b-kube-api-access-8dtlj\") pod \"collect-profiles-29412660-x4cff\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.517927 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:00 crc kubenswrapper[4756]: I1203 11:00:00.939531 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff"] Dec 03 11:00:01 crc kubenswrapper[4756]: I1203 11:00:01.287834 4756 generic.go:334] "Generic (PLEG): container finished" podID="e110b86a-f193-448b-bd73-1babbc0b175b" containerID="bdbe457fa16f76a6dbbb6c839321592c22d89fb37aa4b11762951bc5da0e2f0c" exitCode=0 Dec 03 11:00:01 crc kubenswrapper[4756]: I1203 11:00:01.287988 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" event={"ID":"e110b86a-f193-448b-bd73-1babbc0b175b","Type":"ContainerDied","Data":"bdbe457fa16f76a6dbbb6c839321592c22d89fb37aa4b11762951bc5da0e2f0c"} Dec 03 11:00:01 crc kubenswrapper[4756]: I1203 11:00:01.288211 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" event={"ID":"e110b86a-f193-448b-bd73-1babbc0b175b","Type":"ContainerStarted","Data":"9022ee0b2700e557673c0953c3e420aa229458c448393c07ccc06f6c1750154a"} Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.520880 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.623058 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e110b86a-f193-448b-bd73-1babbc0b175b-secret-volume\") pod \"e110b86a-f193-448b-bd73-1babbc0b175b\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.623282 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e110b86a-f193-448b-bd73-1babbc0b175b-config-volume\") pod \"e110b86a-f193-448b-bd73-1babbc0b175b\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.623326 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dtlj\" (UniqueName: \"kubernetes.io/projected/e110b86a-f193-448b-bd73-1babbc0b175b-kube-api-access-8dtlj\") pod \"e110b86a-f193-448b-bd73-1babbc0b175b\" (UID: \"e110b86a-f193-448b-bd73-1babbc0b175b\") " Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.624265 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e110b86a-f193-448b-bd73-1babbc0b175b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e110b86a-f193-448b-bd73-1babbc0b175b" (UID: "e110b86a-f193-448b-bd73-1babbc0b175b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.629243 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e110b86a-f193-448b-bd73-1babbc0b175b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e110b86a-f193-448b-bd73-1babbc0b175b" (UID: "e110b86a-f193-448b-bd73-1babbc0b175b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.629265 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e110b86a-f193-448b-bd73-1babbc0b175b-kube-api-access-8dtlj" (OuterVolumeSpecName: "kube-api-access-8dtlj") pod "e110b86a-f193-448b-bd73-1babbc0b175b" (UID: "e110b86a-f193-448b-bd73-1babbc0b175b"). InnerVolumeSpecName "kube-api-access-8dtlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.724943 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e110b86a-f193-448b-bd73-1babbc0b175b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.725015 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e110b86a-f193-448b-bd73-1babbc0b175b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:00:02 crc kubenswrapper[4756]: I1203 11:00:02.725027 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dtlj\" (UniqueName: \"kubernetes.io/projected/e110b86a-f193-448b-bd73-1babbc0b175b-kube-api-access-8dtlj\") on node \"crc\" DevicePath \"\"" Dec 03 11:00:03 crc kubenswrapper[4756]: I1203 11:00:03.300665 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" event={"ID":"e110b86a-f193-448b-bd73-1babbc0b175b","Type":"ContainerDied","Data":"9022ee0b2700e557673c0953c3e420aa229458c448393c07ccc06f6c1750154a"} Dec 03 11:00:03 crc kubenswrapper[4756]: I1203 11:00:03.300715 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9022ee0b2700e557673c0953c3e420aa229458c448393c07ccc06f6c1750154a" Dec 03 11:00:03 crc kubenswrapper[4756]: I1203 11:00:03.300757 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff" Dec 03 11:00:22 crc kubenswrapper[4756]: I1203 11:00:22.607258 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:00:22 crc kubenswrapper[4756]: I1203 11:00:22.607828 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:00:22 crc kubenswrapper[4756]: I1203 11:00:22.607882 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:00:22 crc kubenswrapper[4756]: I1203 11:00:22.608570 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9ee7eef0aeb97ea95a86f3145a817bc46a4709e501b1caee8826aa807385845"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:00:22 crc kubenswrapper[4756]: I1203 11:00:22.608647 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://c9ee7eef0aeb97ea95a86f3145a817bc46a4709e501b1caee8826aa807385845" gracePeriod=600 Dec 03 11:00:23 crc kubenswrapper[4756]: I1203 11:00:23.460906 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="c9ee7eef0aeb97ea95a86f3145a817bc46a4709e501b1caee8826aa807385845" exitCode=0 Dec 03 11:00:23 crc kubenswrapper[4756]: I1203 11:00:23.461032 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"c9ee7eef0aeb97ea95a86f3145a817bc46a4709e501b1caee8826aa807385845"} Dec 03 11:00:23 crc kubenswrapper[4756]: I1203 11:00:23.461297 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"71b46a8d298e8651d77837df1ef004e304d9e2a98b64e7662b8846847a02f75c"} Dec 03 11:00:23 crc kubenswrapper[4756]: I1203 11:00:23.461329 4756 scope.go:117] "RemoveContainer" containerID="13e1d8ec0957322f12ab47e9d389db88df6c0f1b52b23a498ed9e256d65d6c0f" Dec 03 11:02:09 crc kubenswrapper[4756]: I1203 11:02:09.435613 4756 scope.go:117] "RemoveContainer" containerID="86590e4837a1a00770c1b148d83ae428cec8595caeeb08938267cb99ccc1fc08" Dec 03 11:02:22 crc kubenswrapper[4756]: I1203 11:02:22.608114 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:02:22 crc kubenswrapper[4756]: I1203 11:02:22.609063 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:02:52 crc kubenswrapper[4756]: I1203 11:02:52.608164 4756 
patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:02:52 crc kubenswrapper[4756]: I1203 11:02:52.608811 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:03:22 crc kubenswrapper[4756]: I1203 11:03:22.607532 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:03:22 crc kubenswrapper[4756]: I1203 11:03:22.608544 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:03:22 crc kubenswrapper[4756]: I1203 11:03:22.608614 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:03:22 crc kubenswrapper[4756]: I1203 11:03:22.609409 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71b46a8d298e8651d77837df1ef004e304d9e2a98b64e7662b8846847a02f75c"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:03:22 crc kubenswrapper[4756]: I1203 11:03:22.609476 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://71b46a8d298e8651d77837df1ef004e304d9e2a98b64e7662b8846847a02f75c" gracePeriod=600 Dec 03 11:03:23 crc kubenswrapper[4756]: I1203 11:03:23.651553 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="71b46a8d298e8651d77837df1ef004e304d9e2a98b64e7662b8846847a02f75c" exitCode=0 Dec 03 11:03:23 crc kubenswrapper[4756]: I1203 11:03:23.651641 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"71b46a8d298e8651d77837df1ef004e304d9e2a98b64e7662b8846847a02f75c"} Dec 03 11:03:23 crc kubenswrapper[4756]: I1203 11:03:23.652462 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"01dfa0931fd4257f5b01935057514c563c7b4e22621a95eaed462238153a1e0f"} Dec 03 11:03:23 crc kubenswrapper[4756]: I1203 11:03:23.652488 4756 scope.go:117] "RemoveContainer" containerID="c9ee7eef0aeb97ea95a86f3145a817bc46a4709e501b1caee8826aa807385845" Dec 03 11:05:42 crc kubenswrapper[4756]: I1203 11:05:42.535135 4756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 11:05:52 crc kubenswrapper[4756]: I1203 11:05:52.607739 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:05:52 crc kubenswrapper[4756]: I1203 11:05:52.608309 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.583292 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lpmgs"] Dec 03 11:05:53 crc kubenswrapper[4756]: E1203 11:05:53.583699 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e110b86a-f193-448b-bd73-1babbc0b175b" containerName="collect-profiles" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.583730 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e110b86a-f193-448b-bd73-1babbc0b175b" containerName="collect-profiles" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.583883 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e110b86a-f193-448b-bd73-1babbc0b175b" containerName="collect-profiles" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.584532 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.588568 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bxjtd"] Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.589549 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bxjtd" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.591824 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.592210 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.592585 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l85df" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.592720 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n4f9k" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.596722 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lpmgs"] Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.614910 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bxjtd"] Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.640419 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-j2l27"] Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.641331 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.643629 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-s2mnh" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.649982 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-j2l27"] Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.678308 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s7xs\" (UniqueName: \"kubernetes.io/projected/15eea9f7-71bc-4b1d-810a-8dd3da3015f2-kube-api-access-9s7xs\") pod \"cert-manager-cainjector-7f985d654d-lpmgs\" (UID: \"15eea9f7-71bc-4b1d-810a-8dd3da3015f2\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.780197 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6g87\" (UniqueName: \"kubernetes.io/projected/23b87262-d9c4-45f6-8cc7-711f71e1a6c0-kube-api-access-x6g87\") pod \"cert-manager-webhook-5655c58dd6-j2l27\" (UID: \"23b87262-d9c4-45f6-8cc7-711f71e1a6c0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.780264 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67p5l\" (UniqueName: \"kubernetes.io/projected/6ab38bf9-5ba7-4205-82b7-30337cd2694f-kube-api-access-67p5l\") pod \"cert-manager-5b446d88c5-bxjtd\" (UID: \"6ab38bf9-5ba7-4205-82b7-30337cd2694f\") " pod="cert-manager/cert-manager-5b446d88c5-bxjtd" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.780583 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s7xs\" (UniqueName: 
\"kubernetes.io/projected/15eea9f7-71bc-4b1d-810a-8dd3da3015f2-kube-api-access-9s7xs\") pod \"cert-manager-cainjector-7f985d654d-lpmgs\" (UID: \"15eea9f7-71bc-4b1d-810a-8dd3da3015f2\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.803445 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s7xs\" (UniqueName: \"kubernetes.io/projected/15eea9f7-71bc-4b1d-810a-8dd3da3015f2-kube-api-access-9s7xs\") pod \"cert-manager-cainjector-7f985d654d-lpmgs\" (UID: \"15eea9f7-71bc-4b1d-810a-8dd3da3015f2\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.881891 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6g87\" (UniqueName: \"kubernetes.io/projected/23b87262-d9c4-45f6-8cc7-711f71e1a6c0-kube-api-access-x6g87\") pod \"cert-manager-webhook-5655c58dd6-j2l27\" (UID: \"23b87262-d9c4-45f6-8cc7-711f71e1a6c0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.881990 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67p5l\" (UniqueName: \"kubernetes.io/projected/6ab38bf9-5ba7-4205-82b7-30337cd2694f-kube-api-access-67p5l\") pod \"cert-manager-5b446d88c5-bxjtd\" (UID: \"6ab38bf9-5ba7-4205-82b7-30337cd2694f\") " pod="cert-manager/cert-manager-5b446d88c5-bxjtd" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.903095 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67p5l\" (UniqueName: \"kubernetes.io/projected/6ab38bf9-5ba7-4205-82b7-30337cd2694f-kube-api-access-67p5l\") pod \"cert-manager-5b446d88c5-bxjtd\" (UID: \"6ab38bf9-5ba7-4205-82b7-30337cd2694f\") " pod="cert-manager/cert-manager-5b446d88c5-bxjtd" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.907337 4756 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.908686 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6g87\" (UniqueName: \"kubernetes.io/projected/23b87262-d9c4-45f6-8cc7-711f71e1a6c0-kube-api-access-x6g87\") pod \"cert-manager-webhook-5655c58dd6-j2l27\" (UID: \"23b87262-d9c4-45f6-8cc7-711f71e1a6c0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.919165 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bxjtd" Dec 03 11:05:53 crc kubenswrapper[4756]: I1203 11:05:53.959047 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" Dec 03 11:05:54 crc kubenswrapper[4756]: I1203 11:05:54.180839 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bxjtd"] Dec 03 11:05:54 crc kubenswrapper[4756]: I1203 11:05:54.198635 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:05:54 crc kubenswrapper[4756]: I1203 11:05:54.202039 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lpmgs"] Dec 03 11:05:54 crc kubenswrapper[4756]: I1203 11:05:54.452477 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-j2l27"] Dec 03 11:05:54 crc kubenswrapper[4756]: W1203 11:05:54.455425 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23b87262_d9c4_45f6_8cc7_711f71e1a6c0.slice/crio-5df56678260e8e4a09db20b5909e42caa109dab35b5f0dd9de4235bf74d9adbb WatchSource:0}: Error finding container 
5df56678260e8e4a09db20b5909e42caa109dab35b5f0dd9de4235bf74d9adbb: Status 404 returned error can't find the container with id 5df56678260e8e4a09db20b5909e42caa109dab35b5f0dd9de4235bf74d9adbb Dec 03 11:05:54 crc kubenswrapper[4756]: I1203 11:05:54.688604 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bxjtd" event={"ID":"6ab38bf9-5ba7-4205-82b7-30337cd2694f","Type":"ContainerStarted","Data":"7d76d76b88bc964023991556cc1949c109adff79260cb265ccea5effe20d523c"} Dec 03 11:05:54 crc kubenswrapper[4756]: I1203 11:05:54.689818 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" event={"ID":"15eea9f7-71bc-4b1d-810a-8dd3da3015f2","Type":"ContainerStarted","Data":"dda9030ace061f506d0f0d376f74804ca191c1f90db0b9d07003ee12577ba479"} Dec 03 11:05:54 crc kubenswrapper[4756]: I1203 11:05:54.690894 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" event={"ID":"23b87262-d9c4-45f6-8cc7-711f71e1a6c0","Type":"ContainerStarted","Data":"5df56678260e8e4a09db20b5909e42caa109dab35b5f0dd9de4235bf74d9adbb"} Dec 03 11:05:56 crc kubenswrapper[4756]: I1203 11:05:56.705874 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bxjtd" event={"ID":"6ab38bf9-5ba7-4205-82b7-30337cd2694f","Type":"ContainerStarted","Data":"008bca6048a3574e09bbfdad43f5ae33b84773abe9cb51b3e22869cb2ff8b214"} Dec 03 11:05:56 crc kubenswrapper[4756]: I1203 11:05:56.729559 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-bxjtd" podStartSLOduration=1.7293340929999999 podStartE2EDuration="3.729522636s" podCreationTimestamp="2025-12-03 11:05:53 +0000 UTC" firstStartedPulling="2025-12-03 11:05:54.198335604 +0000 UTC m=+765.228336848" lastFinishedPulling="2025-12-03 11:05:56.198524147 +0000 UTC m=+767.228525391" observedRunningTime="2025-12-03 
11:05:56.723014131 +0000 UTC m=+767.753015425" watchObservedRunningTime="2025-12-03 11:05:56.729522636 +0000 UTC m=+767.759523890" Dec 03 11:05:58 crc kubenswrapper[4756]: I1203 11:05:58.720601 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" event={"ID":"23b87262-d9c4-45f6-8cc7-711f71e1a6c0","Type":"ContainerStarted","Data":"367f53afab6ee3ffacd5c2fd31060a878c505008a64873a4195f9590445ff5d6"} Dec 03 11:05:58 crc kubenswrapper[4756]: I1203 11:05:58.721137 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" Dec 03 11:05:58 crc kubenswrapper[4756]: I1203 11:05:58.722915 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" event={"ID":"15eea9f7-71bc-4b1d-810a-8dd3da3015f2","Type":"ContainerStarted","Data":"28dcce98557c8e6ad5a12cf3b0620a75f785e936b5be7deb8f5028060abc724f"} Dec 03 11:05:58 crc kubenswrapper[4756]: I1203 11:05:58.744609 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" podStartSLOduration=2.5790877930000002 podStartE2EDuration="5.744575185s" podCreationTimestamp="2025-12-03 11:05:53 +0000 UTC" firstStartedPulling="2025-12-03 11:05:54.459012382 +0000 UTC m=+765.489013626" lastFinishedPulling="2025-12-03 11:05:57.624499774 +0000 UTC m=+768.654501018" observedRunningTime="2025-12-03 11:05:58.743671896 +0000 UTC m=+769.773673170" watchObservedRunningTime="2025-12-03 11:05:58.744575185 +0000 UTC m=+769.774576459" Dec 03 11:05:58 crc kubenswrapper[4756]: I1203 11:05:58.765302 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-lpmgs" podStartSLOduration=2.414115537 podStartE2EDuration="5.765269524s" podCreationTimestamp="2025-12-03 11:05:53 +0000 UTC" firstStartedPulling="2025-12-03 11:05:54.214109678 +0000 UTC 
m=+765.244110922" lastFinishedPulling="2025-12-03 11:05:57.565263655 +0000 UTC m=+768.595264909" observedRunningTime="2025-12-03 11:05:58.760170245 +0000 UTC m=+769.790171529" watchObservedRunningTime="2025-12-03 11:05:58.765269524 +0000 UTC m=+769.795270778" Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.440845 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zqms7"] Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.441801 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-controller" containerID="cri-o://9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.441970 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="northd" containerID="cri-o://1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.441895 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="nbdb" containerID="cri-o://d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.442047 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.442095 4756 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-node" containerID="cri-o://98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.442129 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-acl-logging" containerID="cri-o://0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.442525 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="sbdb" containerID="cri-o://ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.486215 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" containerID="cri-o://895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" gracePeriod=30 Dec 03 11:06:03 crc kubenswrapper[4756]: I1203 11:06:03.962557 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-j2l27" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.175122 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/3.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.177759 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovn-acl-logging/0.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.178404 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovn-controller/0.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.179002 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239023 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8zh8g"] Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239322 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="sbdb" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239337 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="sbdb" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239346 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="nbdb" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239355 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="nbdb" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239363 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239369 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239376 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239381 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239390 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239396 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239408 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239415 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239427 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-node" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239434 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-node" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239444 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239451 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239462 4756 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239469 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239480 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="northd" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239486 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="northd" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239497 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-acl-logging" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239502 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-acl-logging" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239513 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kubecfg-setup" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239520 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kubecfg-setup" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239622 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239633 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239644 4756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="northd" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239652 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-acl-logging" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239687 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239696 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239703 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239711 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovn-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239720 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="kube-rbac-proxy-node" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239727 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="sbdb" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239739 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="nbdb" Dec 03 11:06:04 crc kubenswrapper[4756]: E1203 11:06:04.239848 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.239859 4756 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.240004 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerName="ovnkube-controller" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.241824 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.356802 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-openvswitch\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.356924 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-etc-openvswitch\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.356982 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.356985 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357061 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-netd\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357101 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357134 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-ovn-kubernetes\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357157 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357295 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-netns\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357328 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-var-lib-openvswitch\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357363 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tktq2\" (UniqueName: \"kubernetes.io/projected/b16dcb4b-a5dd-4081-a569-7f5a024f673b-kube-api-access-tktq2\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 
crc kubenswrapper[4756]: I1203 11:06:04.357389 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-env-overrides\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357389 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357413 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-bin\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358002 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-log-socket\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358033 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovn-node-metrics-cert\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358067 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-config\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358097 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-slash\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357440 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357477 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.357898 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358121 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-script-lib\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358239 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-slash" (OuterVolumeSpecName: "host-slash") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358324 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-ovn\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358392 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-kubelet\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358395 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-log-socket" (OuterVolumeSpecName: "log-socket") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358443 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358398 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358466 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358485 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358502 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358506 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-node-log\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358547 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-systemd\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358571 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-systemd-units\") pod \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\" (UID: \"b16dcb4b-a5dd-4081-a569-7f5a024f673b\") " Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358569 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-node-log" (OuterVolumeSpecName: "node-log") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358659 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-log-socket\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358680 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358687 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-ovnkube-script-lib\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358717 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358749 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-env-overrides\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358771 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-slash\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358817 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-kubelet\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358855 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-cni-netd\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358884 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9d6\" (UniqueName: \"kubernetes.io/projected/0daaed60-f794-45c1-9081-30def9ea4fc9-kube-api-access-hb9d6\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358934 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.358983 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-ovnkube-config\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359009 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-node-log\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359061 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-cni-bin\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359083 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-systemd-units\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359123 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359147 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-etc-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359172 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0daaed60-f794-45c1-9081-30def9ea4fc9-ovn-node-metrics-cert\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359195 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-run-netns\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359215 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-ovn\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359256 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-var-lib-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359299 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-systemd\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359354 4756 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359366 4756 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359377 4756 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359388 4756 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359399 4756 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359422 4756 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359438 4756 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359451 4756 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359464 4756 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359475 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359486 4756 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359497 4756 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-log-socket\") on node \"crc\" DevicePath 
\"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359506 4756 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359517 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359528 4756 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359550 4756 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.359052 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.367192 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.368154 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16dcb4b-a5dd-4081-a569-7f5a024f673b-kube-api-access-tktq2" (OuterVolumeSpecName: "kube-api-access-tktq2") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "kube-api-access-tktq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.381610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b16dcb4b-a5dd-4081-a569-7f5a024f673b" (UID: "b16dcb4b-a5dd-4081-a569-7f5a024f673b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.460912 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-env-overrides\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461009 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-slash\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461048 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-kubelet\") pod \"ovnkube-node-8zh8g\" (UID: 
\"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461103 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-cni-netd\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461140 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9d6\" (UniqueName: \"kubernetes.io/projected/0daaed60-f794-45c1-9081-30def9ea4fc9-kube-api-access-hb9d6\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461148 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-slash\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461192 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-kubelet\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461180 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461299 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-cni-netd\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-ovnkube-config\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461344 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-run-ovn-kubernetes\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461450 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-node-log\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461479 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-node-log\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 
11:06:04.461542 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-cni-bin\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461574 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-systemd-units\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461587 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-cni-bin\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461597 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461621 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-systemd-units\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461629 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-etc-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461654 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461661 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0daaed60-f794-45c1-9081-30def9ea4fc9-ovn-node-metrics-cert\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461682 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-run-netns\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461687 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-etc-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461700 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-ovn\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461708 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-env-overrides\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461719 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-run-netns\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461752 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-ovn\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461862 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-var-lib-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461892 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-var-lib-openvswitch\") pod \"ovnkube-node-8zh8g\" (UID: 
\"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461915 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-systemd\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-ovnkube-config\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.461946 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-run-systemd\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462008 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-log-socket\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462029 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-ovnkube-script-lib\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 
11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462063 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462071 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-log-socket\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462181 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tktq2\" (UniqueName: \"kubernetes.io/projected/b16dcb4b-a5dd-4081-a569-7f5a024f673b-kube-api-access-tktq2\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462197 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462207 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b16dcb4b-a5dd-4081-a569-7f5a024f673b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462217 4756 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b16dcb4b-a5dd-4081-a569-7f5a024f673b-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462247 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0daaed60-f794-45c1-9081-30def9ea4fc9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.462874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0daaed60-f794-45c1-9081-30def9ea4fc9-ovnkube-script-lib\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.465936 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0daaed60-f794-45c1-9081-30def9ea4fc9-ovn-node-metrics-cert\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.478062 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9d6\" (UniqueName: \"kubernetes.io/projected/0daaed60-f794-45c1-9081-30def9ea4fc9-kube-api-access-hb9d6\") pod \"ovnkube-node-8zh8g\" (UID: \"0daaed60-f794-45c1-9081-30def9ea4fc9\") " pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.560068 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.754651 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"1a7cd24dd3e5ee1e1a35294ce162781770628cdb8d6a5add65337268ac044f0f"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.755344 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"fa648a661a01dde08a7a5ce650f73373949956690c8c56197e2f08fd887c419b"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.757428 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/2.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.757851 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/1.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.757898 4756 generic.go:334] "Generic (PLEG): container finished" podID="d0dad5dd-86f8-4a8a-aed6-dd07123c5058" containerID="0560b3efc6adc446014a14846aa8ab9b49f6c721761c97590e339b4729018ca1" exitCode=2 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.757976 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerDied","Data":"0560b3efc6adc446014a14846aa8ab9b49f6c721761c97590e339b4729018ca1"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.758069 4756 scope.go:117] "RemoveContainer" containerID="315c8390aff55a4c17bf582d4a48938ac7bcf02baf8dc4007232c7ad76bb14b6" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.758814 4756 scope.go:117] "RemoveContainer" 
containerID="0560b3efc6adc446014a14846aa8ab9b49f6c721761c97590e339b4729018ca1" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.762973 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovnkube-controller/3.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.766631 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovn-acl-logging/0.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.767791 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zqms7_b16dcb4b-a5dd-4081-a569-7f5a024f673b/ovn-controller/0.log" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768518 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" exitCode=0 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768554 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" exitCode=0 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768566 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" exitCode=0 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768576 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" exitCode=0 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768583 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" 
containerID="f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" exitCode=0 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768591 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" exitCode=0 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768599 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" exitCode=143 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768607 4756 generic.go:334] "Generic (PLEG): container finished" podID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" containerID="9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" exitCode=143 Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768631 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768667 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768680 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768691 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" 
event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768701 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768703 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768713 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768813 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768826 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768831 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768837 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} Dec 03 11:06:04 crc 
kubenswrapper[4756]: I1203 11:06:04.768842 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768847 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768852 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768857 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768862 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768867 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768876 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768897 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768903 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768908 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768914 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768919 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768925 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768930 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768935 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768940 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768946 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768972 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768982 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768988 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.768995 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769000 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769005 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769010 4756 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769015 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769020 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769025 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769030 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769037 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zqms7" event={"ID":"b16dcb4b-a5dd-4081-a569-7f5a024f673b","Type":"ContainerDied","Data":"b31490848c54bc40b0a198254faf6c7d14461bb2c383edd54cc35e7e9401db79"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769046 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769051 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} Dec 03 
11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769057 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769062 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769068 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769179 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769190 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769195 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769201 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.769216 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} Dec 03 
11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.862233 4756 scope.go:117] "RemoveContainer" containerID="895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.888031 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.904125 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zqms7"] Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.907688 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zqms7"] Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.917806 4756 scope.go:117] "RemoveContainer" containerID="ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.931765 4756 scope.go:117] "RemoveContainer" containerID="d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.945852 4756 scope.go:117] "RemoveContainer" containerID="1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.959071 4756 scope.go:117] "RemoveContainer" containerID="f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.974822 4756 scope.go:117] "RemoveContainer" containerID="98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" Dec 03 11:06:04 crc kubenswrapper[4756]: I1203 11:06:04.991017 4756 scope.go:117] "RemoveContainer" containerID="0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.008376 4756 scope.go:117] "RemoveContainer" containerID="9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 
11:06:05.024032 4756 scope.go:117] "RemoveContainer" containerID="64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.039021 4756 scope.go:117] "RemoveContainer" containerID="895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.039493 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": container with ID starting with 895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344 not found: ID does not exist" containerID="895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.039550 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} err="failed to get container status \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": rpc error: code = NotFound desc = could not find container \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": container with ID starting with 895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.039633 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.039994 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": container with ID starting with 246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824 not found: ID does not exist" 
containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.040033 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} err="failed to get container status \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": rpc error: code = NotFound desc = could not find container \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": container with ID starting with 246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.040052 4756 scope.go:117] "RemoveContainer" containerID="ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.040935 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": container with ID starting with ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce not found: ID does not exist" containerID="ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.041020 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} err="failed to get container status \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": rpc error: code = NotFound desc = could not find container \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": container with ID starting with ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.041051 4756 scope.go:117] 
"RemoveContainer" containerID="d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.041301 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": container with ID starting with d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d not found: ID does not exist" containerID="d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.041322 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} err="failed to get container status \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": rpc error: code = NotFound desc = could not find container \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": container with ID starting with d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.041335 4756 scope.go:117] "RemoveContainer" containerID="1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.041688 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": container with ID starting with 1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc not found: ID does not exist" containerID="1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.041755 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} err="failed to get container status \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": rpc error: code = NotFound desc = could not find container \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": container with ID starting with 1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.041804 4756 scope.go:117] "RemoveContainer" containerID="f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.042311 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": container with ID starting with f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547 not found: ID does not exist" containerID="f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.042345 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} err="failed to get container status \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": rpc error: code = NotFound desc = could not find container \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": container with ID starting with f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.042362 4756 scope.go:117] "RemoveContainer" containerID="98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.042653 4756 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": container with ID starting with 98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6 not found: ID does not exist" containerID="98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.042677 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} err="failed to get container status \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": rpc error: code = NotFound desc = could not find container \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": container with ID starting with 98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.042694 4756 scope.go:117] "RemoveContainer" containerID="0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.042932 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": container with ID starting with 0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581 not found: ID does not exist" containerID="0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.042994 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} err="failed to get container status \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": rpc error: code = NotFound desc = could not find container 
\"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": container with ID starting with 0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.043010 4756 scope.go:117] "RemoveContainer" containerID="9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.043362 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": container with ID starting with 9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790 not found: ID does not exist" containerID="9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.043390 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} err="failed to get container status \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": rpc error: code = NotFound desc = could not find container \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": container with ID starting with 9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.043406 4756 scope.go:117] "RemoveContainer" containerID="64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509" Dec 03 11:06:05 crc kubenswrapper[4756]: E1203 11:06:05.043766 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": container with ID starting with 64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509 not found: ID does not exist" 
containerID="64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.043793 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} err="failed to get container status \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": rpc error: code = NotFound desc = could not find container \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": container with ID starting with 64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.043807 4756 scope.go:117] "RemoveContainer" containerID="895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.044155 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} err="failed to get container status \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": rpc error: code = NotFound desc = could not find container \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": container with ID starting with 895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.044182 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.044508 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} err="failed to get container status \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": rpc error: code = NotFound desc = could 
not find container \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": container with ID starting with 246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.044531 4756 scope.go:117] "RemoveContainer" containerID="ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.044858 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} err="failed to get container status \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": rpc error: code = NotFound desc = could not find container \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": container with ID starting with ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.044877 4756 scope.go:117] "RemoveContainer" containerID="d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.045117 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} err="failed to get container status \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": rpc error: code = NotFound desc = could not find container \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": container with ID starting with d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.045138 4756 scope.go:117] "RemoveContainer" containerID="1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 
11:06:05.045388 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} err="failed to get container status \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": rpc error: code = NotFound desc = could not find container \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": container with ID starting with 1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.045417 4756 scope.go:117] "RemoveContainer" containerID="f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.045731 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} err="failed to get container status \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": rpc error: code = NotFound desc = could not find container \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": container with ID starting with f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.045756 4756 scope.go:117] "RemoveContainer" containerID="98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046048 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} err="failed to get container status \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": rpc error: code = NotFound desc = could not find container \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": container with ID starting with 
98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046071 4756 scope.go:117] "RemoveContainer" containerID="0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046262 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} err="failed to get container status \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": rpc error: code = NotFound desc = could not find container \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": container with ID starting with 0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046282 4756 scope.go:117] "RemoveContainer" containerID="9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046460 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} err="failed to get container status \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": rpc error: code = NotFound desc = could not find container \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": container with ID starting with 9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046481 4756 scope.go:117] "RemoveContainer" containerID="64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046673 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} err="failed to get container status \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": rpc error: code = NotFound desc = could not find container \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": container with ID starting with 64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.046698 4756 scope.go:117] "RemoveContainer" containerID="895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.047511 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} err="failed to get container status \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": rpc error: code = NotFound desc = could not find container \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": container with ID starting with 895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.047529 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.048196 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} err="failed to get container status \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": rpc error: code = NotFound desc = could not find container \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": container with ID starting with 246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824 not found: ID does not 
exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.048297 4756 scope.go:117] "RemoveContainer" containerID="ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.048583 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} err="failed to get container status \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": rpc error: code = NotFound desc = could not find container \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": container with ID starting with ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.048606 4756 scope.go:117] "RemoveContainer" containerID="d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.048831 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} err="failed to get container status \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": rpc error: code = NotFound desc = could not find container \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": container with ID starting with d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.048853 4756 scope.go:117] "RemoveContainer" containerID="1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.049169 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} err="failed to get container status 
\"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": rpc error: code = NotFound desc = could not find container \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": container with ID starting with 1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.049220 4756 scope.go:117] "RemoveContainer" containerID="f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.049844 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} err="failed to get container status \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": rpc error: code = NotFound desc = could not find container \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": container with ID starting with f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.050042 4756 scope.go:117] "RemoveContainer" containerID="98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.050657 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} err="failed to get container status \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": rpc error: code = NotFound desc = could not find container \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": container with ID starting with 98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.050710 4756 scope.go:117] "RemoveContainer" 
containerID="0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.051182 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} err="failed to get container status \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": rpc error: code = NotFound desc = could not find container \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": container with ID starting with 0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.051204 4756 scope.go:117] "RemoveContainer" containerID="9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.051541 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} err="failed to get container status \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": rpc error: code = NotFound desc = could not find container \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": container with ID starting with 9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.051588 4756 scope.go:117] "RemoveContainer" containerID="64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.051881 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} err="failed to get container status \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": rpc error: code = NotFound desc = could 
not find container \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": container with ID starting with 64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.051911 4756 scope.go:117] "RemoveContainer" containerID="895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.052297 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344"} err="failed to get container status \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": rpc error: code = NotFound desc = could not find container \"895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344\": container with ID starting with 895c734ae0620881b2f2dfcaeb31cf66e1e8a96f2744597a55ce437e7a0ca344 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.052325 4756 scope.go:117] "RemoveContainer" containerID="246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.052702 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824"} err="failed to get container status \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": rpc error: code = NotFound desc = could not find container \"246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824\": container with ID starting with 246050d04ac5bca622648cee94e788ccf6bd514ef42258e3b10dc88ca9d07824 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.052724 4756 scope.go:117] "RemoveContainer" containerID="ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 
11:06:05.053070 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce"} err="failed to get container status \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": rpc error: code = NotFound desc = could not find container \"ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce\": container with ID starting with ed14307f02dca59ffdeabd8f37b8b5a93010cf371e95fe5b9ae67524c08306ce not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.053092 4756 scope.go:117] "RemoveContainer" containerID="d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.053385 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d"} err="failed to get container status \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": rpc error: code = NotFound desc = could not find container \"d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d\": container with ID starting with d82d26afe4e2aaedabdbd6ea38ba2d94710bb81b3c5631c8fe4bf18ec9c5404d not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.053405 4756 scope.go:117] "RemoveContainer" containerID="1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.053691 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc"} err="failed to get container status \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": rpc error: code = NotFound desc = could not find container \"1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc\": container with ID starting with 
1ec0a4745721bcbbc97a26778a7c22bbd663def10642b66da343d442707afcfc not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.053710 4756 scope.go:117] "RemoveContainer" containerID="f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.054039 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547"} err="failed to get container status \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": rpc error: code = NotFound desc = could not find container \"f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547\": container with ID starting with f218fd5b2307eef827d6ac3752719451c9572ba367d9f8efad626a9adf7bc547 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.054059 4756 scope.go:117] "RemoveContainer" containerID="98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.054409 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6"} err="failed to get container status \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": rpc error: code = NotFound desc = could not find container \"98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6\": container with ID starting with 98386e9fbd3399e2da6349e0496f2733cb2f7c0824dac26c9c45b2be465d90b6 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.054430 4756 scope.go:117] "RemoveContainer" containerID="0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.054682 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581"} err="failed to get container status \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": rpc error: code = NotFound desc = could not find container \"0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581\": container with ID starting with 0f5a52135a4a0de4261b9c47afa5c306f5adfa9e313eb26c6ee5c43296de6581 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.054707 4756 scope.go:117] "RemoveContainer" containerID="9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.055046 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790"} err="failed to get container status \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": rpc error: code = NotFound desc = could not find container \"9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790\": container with ID starting with 9eebff58ba13fbfeac8497b23ccdfe4461a350088b46678622c71a84d996b790 not found: ID does not exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.055065 4756 scope.go:117] "RemoveContainer" containerID="64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.055256 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509"} err="failed to get container status \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": rpc error: code = NotFound desc = could not find container \"64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509\": container with ID starting with 64026774876b8392f7e55e90a5dde448d935ac21aeb45cac2959fa2162b2b509 not found: ID does not 
exist" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.240945 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16dcb4b-a5dd-4081-a569-7f5a024f673b" path="/var/lib/kubelet/pods/b16dcb4b-a5dd-4081-a569-7f5a024f673b/volumes" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.778122 4756 generic.go:334] "Generic (PLEG): container finished" podID="0daaed60-f794-45c1-9081-30def9ea4fc9" containerID="1a7cd24dd3e5ee1e1a35294ce162781770628cdb8d6a5add65337268ac044f0f" exitCode=0 Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.778180 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerDied","Data":"1a7cd24dd3e5ee1e1a35294ce162781770628cdb8d6a5add65337268ac044f0f"} Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.781083 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4xwtn_d0dad5dd-86f8-4a8a-aed6-dd07123c5058/kube-multus/2.log" Dec 03 11:06:05 crc kubenswrapper[4756]: I1203 11:06:05.781160 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4xwtn" event={"ID":"d0dad5dd-86f8-4a8a-aed6-dd07123c5058","Type":"ContainerStarted","Data":"44604d9f4596cc783e5d93f9121b07218a78459eec8102b3cb53a9518bbed5cb"} Dec 03 11:06:06 crc kubenswrapper[4756]: I1203 11:06:06.792942 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"0b77c72041939e18a52e85fcd26f1e05e950a21a21e191819e1c5c457fe8611b"} Dec 03 11:06:06 crc kubenswrapper[4756]: I1203 11:06:06.793473 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"3809d44b61e6a32cafbfbb8467a16f431f7999d39c5ae516dac4a28e62cc5a91"} Dec 03 
11:06:06 crc kubenswrapper[4756]: I1203 11:06:06.793486 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"41787178db8ef01b15369e978fb29ca37e917d9f72dea9703ec384496d6ce124"} Dec 03 11:06:06 crc kubenswrapper[4756]: I1203 11:06:06.793495 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"0a8a1569b723bb06d7e01bde9f449c0a1bf82826ac27130786d9a5d3e082c028"} Dec 03 11:06:06 crc kubenswrapper[4756]: I1203 11:06:06.793513 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"4b78b19434fc2005d6ff38d2d9eecbae8db7e25f679fc3cf9f6cb69fd60f80e4"} Dec 03 11:06:06 crc kubenswrapper[4756]: I1203 11:06:06.793521 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"89ac81225113f8fb40c88139075a1b72e8be2a445db9698c152be486416f38f6"} Dec 03 11:06:08 crc kubenswrapper[4756]: I1203 11:06:08.807600 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"977b16d11e736b874e143152655e287bfb5cf453e367c7aeb4c2e3f0764c45c2"} Dec 03 11:06:12 crc kubenswrapper[4756]: I1203 11:06:12.834242 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" event={"ID":"0daaed60-f794-45c1-9081-30def9ea4fc9","Type":"ContainerStarted","Data":"15bb07d9b069b802bc13bdf3251b36a4389fb64065d29eea5ba0ecc0a053c5ab"} Dec 03 11:06:12 crc kubenswrapper[4756]: I1203 11:06:12.835186 4756 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:12 crc kubenswrapper[4756]: I1203 11:06:12.835206 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:12 crc kubenswrapper[4756]: I1203 11:06:12.835218 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:12 crc kubenswrapper[4756]: I1203 11:06:12.866503 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:12 crc kubenswrapper[4756]: I1203 11:06:12.875420 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:12 crc kubenswrapper[4756]: I1203 11:06:12.899272 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" podStartSLOduration=8.899242588 podStartE2EDuration="8.899242588s" podCreationTimestamp="2025-12-03 11:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:06:12.868864735 +0000 UTC m=+783.898865989" watchObservedRunningTime="2025-12-03 11:06:12.899242588 +0000 UTC m=+783.929243832" Dec 03 11:06:22 crc kubenswrapper[4756]: I1203 11:06:22.607730 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:06:22 crc kubenswrapper[4756]: I1203 11:06:22.608722 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.112307 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v7bbg"] Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.113602 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.140224 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7bbg"] Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.177353 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdpz\" (UniqueName: \"kubernetes.io/projected/20023a38-8a22-400e-898f-59b32ddcef2a-kube-api-access-zrdpz\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.177418 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-utilities\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.177656 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-catalog-content\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 
11:06:27.278545 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-catalog-content\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.278624 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdpz\" (UniqueName: \"kubernetes.io/projected/20023a38-8a22-400e-898f-59b32ddcef2a-kube-api-access-zrdpz\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.278690 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-utilities\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.279503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-catalog-content\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.279531 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-utilities\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.300975 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdpz\" (UniqueName: \"kubernetes.io/projected/20023a38-8a22-400e-898f-59b32ddcef2a-kube-api-access-zrdpz\") pod \"community-operators-v7bbg\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.442677 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.737388 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7bbg"] Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.925181 4756 generic.go:334] "Generic (PLEG): container finished" podID="20023a38-8a22-400e-898f-59b32ddcef2a" containerID="bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0" exitCode=0 Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.925231 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7bbg" event={"ID":"20023a38-8a22-400e-898f-59b32ddcef2a","Type":"ContainerDied","Data":"bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0"} Dec 03 11:06:27 crc kubenswrapper[4756]: I1203 11:06:27.925263 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7bbg" event={"ID":"20023a38-8a22-400e-898f-59b32ddcef2a","Type":"ContainerStarted","Data":"9571319e3d8f5c5f2191debd0f8e6717a21b84717e685113b4e15838ebd3b6b6"} Dec 03 11:06:28 crc kubenswrapper[4756]: I1203 11:06:28.935033 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7bbg" event={"ID":"20023a38-8a22-400e-898f-59b32ddcef2a","Type":"ContainerStarted","Data":"8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42"} Dec 03 11:06:29 crc kubenswrapper[4756]: I1203 11:06:29.943810 4756 
generic.go:334] "Generic (PLEG): container finished" podID="20023a38-8a22-400e-898f-59b32ddcef2a" containerID="8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42" exitCode=0 Dec 03 11:06:29 crc kubenswrapper[4756]: I1203 11:06:29.943935 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7bbg" event={"ID":"20023a38-8a22-400e-898f-59b32ddcef2a","Type":"ContainerDied","Data":"8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42"} Dec 03 11:06:30 crc kubenswrapper[4756]: I1203 11:06:30.954912 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7bbg" event={"ID":"20023a38-8a22-400e-898f-59b32ddcef2a","Type":"ContainerStarted","Data":"8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b"} Dec 03 11:06:34 crc kubenswrapper[4756]: I1203 11:06:34.590744 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8zh8g" Dec 03 11:06:34 crc kubenswrapper[4756]: I1203 11:06:34.632143 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v7bbg" podStartSLOduration=5.188489948 podStartE2EDuration="7.632108764s" podCreationTimestamp="2025-12-03 11:06:27 +0000 UTC" firstStartedPulling="2025-12-03 11:06:27.927563047 +0000 UTC m=+798.957564291" lastFinishedPulling="2025-12-03 11:06:30.371181863 +0000 UTC m=+801.401183107" observedRunningTime="2025-12-03 11:06:30.979129307 +0000 UTC m=+802.009130571" watchObservedRunningTime="2025-12-03 11:06:34.632108764 +0000 UTC m=+805.662110018" Dec 03 11:06:37 crc kubenswrapper[4756]: I1203 11:06:37.444452 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:37 crc kubenswrapper[4756]: I1203 11:06:37.445048 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:37 crc kubenswrapper[4756]: I1203 11:06:37.490912 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:38 crc kubenswrapper[4756]: I1203 11:06:38.054293 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:38 crc kubenswrapper[4756]: I1203 11:06:38.115255 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7bbg"] Dec 03 11:06:40 crc kubenswrapper[4756]: I1203 11:06:40.015289 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v7bbg" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="registry-server" containerID="cri-o://8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b" gracePeriod=2 Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.011048 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.022876 4756 generic.go:334] "Generic (PLEG): container finished" podID="20023a38-8a22-400e-898f-59b32ddcef2a" containerID="8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b" exitCode=0 Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.022943 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7bbg" event={"ID":"20023a38-8a22-400e-898f-59b32ddcef2a","Type":"ContainerDied","Data":"8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b"} Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.023005 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7bbg" event={"ID":"20023a38-8a22-400e-898f-59b32ddcef2a","Type":"ContainerDied","Data":"9571319e3d8f5c5f2191debd0f8e6717a21b84717e685113b4e15838ebd3b6b6"} Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.023030 4756 scope.go:117] "RemoveContainer" containerID="8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.023028 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7bbg" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.055156 4756 scope.go:117] "RemoveContainer" containerID="8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.073503 4756 scope.go:117] "RemoveContainer" containerID="bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.096536 4756 scope.go:117] "RemoveContainer" containerID="8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b" Dec 03 11:06:41 crc kubenswrapper[4756]: E1203 11:06:41.097090 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b\": container with ID starting with 8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b not found: ID does not exist" containerID="8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.097136 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b"} err="failed to get container status \"8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b\": rpc error: code = NotFound desc = could not find container \"8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b\": container with ID starting with 8ebf8e6b3837285ac0d1fc368ef94d419af580077ed4bd38a7e1e9128257d92b not found: ID does not exist" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.097165 4756 scope.go:117] "RemoveContainer" containerID="8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42" Dec 03 11:06:41 crc kubenswrapper[4756]: E1203 11:06:41.097712 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42\": container with ID starting with 8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42 not found: ID does not exist" containerID="8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.097777 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42"} err="failed to get container status \"8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42\": rpc error: code = NotFound desc = could not find container \"8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42\": container with ID starting with 8e97490ead6444406c3de52ed8211a92e3ec1fb5e276329e2bb03216c8dedb42 not found: ID does not exist" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.097815 4756 scope.go:117] "RemoveContainer" containerID="bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0" Dec 03 11:06:41 crc kubenswrapper[4756]: E1203 11:06:41.098985 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0\": container with ID starting with bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0 not found: ID does not exist" containerID="bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.099021 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0"} err="failed to get container status \"bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0\": rpc error: code = NotFound desc = could not find container 
\"bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0\": container with ID starting with bc0725cf7ddb3f128936df5e932581f269479cb122f8c6bb0add57b19b31d6d0 not found: ID does not exist" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.184009 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdpz\" (UniqueName: \"kubernetes.io/projected/20023a38-8a22-400e-898f-59b32ddcef2a-kube-api-access-zrdpz\") pod \"20023a38-8a22-400e-898f-59b32ddcef2a\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.184167 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-catalog-content\") pod \"20023a38-8a22-400e-898f-59b32ddcef2a\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.184386 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-utilities\") pod \"20023a38-8a22-400e-898f-59b32ddcef2a\" (UID: \"20023a38-8a22-400e-898f-59b32ddcef2a\") " Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.185458 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-utilities" (OuterVolumeSpecName: "utilities") pod "20023a38-8a22-400e-898f-59b32ddcef2a" (UID: "20023a38-8a22-400e-898f-59b32ddcef2a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.195118 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20023a38-8a22-400e-898f-59b32ddcef2a-kube-api-access-zrdpz" (OuterVolumeSpecName: "kube-api-access-zrdpz") pod "20023a38-8a22-400e-898f-59b32ddcef2a" (UID: "20023a38-8a22-400e-898f-59b32ddcef2a"). InnerVolumeSpecName "kube-api-access-zrdpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.255331 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20023a38-8a22-400e-898f-59b32ddcef2a" (UID: "20023a38-8a22-400e-898f-59b32ddcef2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.285885 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.285978 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdpz\" (UniqueName: \"kubernetes.io/projected/20023a38-8a22-400e-898f-59b32ddcef2a-kube-api-access-zrdpz\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.285991 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20023a38-8a22-400e-898f-59b32ddcef2a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 11:06:41.358021 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7bbg"] Dec 03 11:06:41 crc kubenswrapper[4756]: I1203 
11:06:41.362038 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v7bbg"] Dec 03 11:06:43 crc kubenswrapper[4756]: I1203 11:06:43.242035 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" path="/var/lib/kubelet/pods/20023a38-8a22-400e-898f-59b32ddcef2a/volumes" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.336656 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb"] Dec 03 11:06:48 crc kubenswrapper[4756]: E1203 11:06:48.337388 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="registry-server" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.337405 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="registry-server" Dec 03 11:06:48 crc kubenswrapper[4756]: E1203 11:06:48.337429 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="extract-utilities" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.337436 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="extract-utilities" Dec 03 11:06:48 crc kubenswrapper[4756]: E1203 11:06:48.337449 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="extract-content" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.337456 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="extract-content" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.337564 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="20023a38-8a22-400e-898f-59b32ddcef2a" containerName="registry-server" Dec 03 11:06:48 
crc kubenswrapper[4756]: I1203 11:06:48.338395 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.341144 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.353112 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb"] Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.393334 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxbk\" (UniqueName: \"kubernetes.io/projected/660ad08f-eaf7-484d-a630-05b51ac13d57-kube-api-access-zqxbk\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.393883 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.393943 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.495047 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxbk\" (UniqueName: \"kubernetes.io/projected/660ad08f-eaf7-484d-a630-05b51ac13d57-kube-api-access-zqxbk\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.495105 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.495132 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.495611 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.495807 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.518556 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxbk\" (UniqueName: \"kubernetes.io/projected/660ad08f-eaf7-484d-a630-05b51ac13d57-kube-api-access-zqxbk\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.659680 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:48 crc kubenswrapper[4756]: I1203 11:06:48.864458 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb"] Dec 03 11:06:49 crc kubenswrapper[4756]: I1203 11:06:49.072103 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" event={"ID":"660ad08f-eaf7-484d-a630-05b51ac13d57","Type":"ContainerStarted","Data":"122c397c6ae0265889ac8ec4d455cf24f0a69561e5c9a5130c43d0ee3a08a537"} Dec 03 11:06:49 crc kubenswrapper[4756]: I1203 11:06:49.072606 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" 
event={"ID":"660ad08f-eaf7-484d-a630-05b51ac13d57","Type":"ContainerStarted","Data":"34aba2b07b23437731af88fb94dd4e9e079a8a2849f51d3144d3fe0ea00ad76a"} Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.078909 4756 generic.go:334] "Generic (PLEG): container finished" podID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerID="122c397c6ae0265889ac8ec4d455cf24f0a69561e5c9a5130c43d0ee3a08a537" exitCode=0 Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.078990 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" event={"ID":"660ad08f-eaf7-484d-a630-05b51ac13d57","Type":"ContainerDied","Data":"122c397c6ae0265889ac8ec4d455cf24f0a69561e5c9a5130c43d0ee3a08a537"} Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.331069 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bgqw9"] Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.332633 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.339187 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgqw9"] Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.420690 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-utilities\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.420740 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js622\" (UniqueName: \"kubernetes.io/projected/adf7b234-118f-4939-ade2-e27c86f1d45e-kube-api-access-js622\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.420786 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-catalog-content\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.522755 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-utilities\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.522812 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-js622\" (UniqueName: \"kubernetes.io/projected/adf7b234-118f-4939-ade2-e27c86f1d45e-kube-api-access-js622\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.522867 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-catalog-content\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.523618 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-catalog-content\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.523697 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-utilities\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.549515 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js622\" (UniqueName: \"kubernetes.io/projected/adf7b234-118f-4939-ade2-e27c86f1d45e-kube-api-access-js622\") pod \"redhat-operators-bgqw9\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.658129 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:06:50 crc kubenswrapper[4756]: I1203 11:06:50.939439 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgqw9"] Dec 03 11:06:50 crc kubenswrapper[4756]: W1203 11:06:50.949232 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf7b234_118f_4939_ade2_e27c86f1d45e.slice/crio-5cb474432ffeea5e53b8f7cb82ea8a247a62aee2a7992f209944b5766745afa6 WatchSource:0}: Error finding container 5cb474432ffeea5e53b8f7cb82ea8a247a62aee2a7992f209944b5766745afa6: Status 404 returned error can't find the container with id 5cb474432ffeea5e53b8f7cb82ea8a247a62aee2a7992f209944b5766745afa6 Dec 03 11:06:51 crc kubenswrapper[4756]: I1203 11:06:51.094572 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgqw9" event={"ID":"adf7b234-118f-4939-ade2-e27c86f1d45e","Type":"ContainerStarted","Data":"5cb474432ffeea5e53b8f7cb82ea8a247a62aee2a7992f209944b5766745afa6"} Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.103159 4756 generic.go:334] "Generic (PLEG): container finished" podID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerID="837325a43752a2ba9538be90765901bb2f3b1382d2801c200d1df250f3d8366e" exitCode=0 Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.103398 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" event={"ID":"660ad08f-eaf7-484d-a630-05b51ac13d57","Type":"ContainerDied","Data":"837325a43752a2ba9538be90765901bb2f3b1382d2801c200d1df250f3d8366e"} Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.105360 4756 generic.go:334] "Generic (PLEG): container finished" podID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerID="27d41fa26397085820d9dcfc5a02436ddacff2a513f4631fbb87e0bb45d0990e" exitCode=0 Dec 03 11:06:52 crc 
kubenswrapper[4756]: I1203 11:06:52.105422 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgqw9" event={"ID":"adf7b234-118f-4939-ade2-e27c86f1d45e","Type":"ContainerDied","Data":"27d41fa26397085820d9dcfc5a02436ddacff2a513f4631fbb87e0bb45d0990e"} Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.607803 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.607921 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.608115 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.609028 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"01dfa0931fd4257f5b01935057514c563c7b4e22621a95eaed462238153a1e0f"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:06:52 crc kubenswrapper[4756]: I1203 11:06:52.609097 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" 
containerID="cri-o://01dfa0931fd4257f5b01935057514c563c7b4e22621a95eaed462238153a1e0f" gracePeriod=600 Dec 03 11:06:53 crc kubenswrapper[4756]: I1203 11:06:53.113372 4756 generic.go:334] "Generic (PLEG): container finished" podID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerID="071236b9afbefa042fdbbd26d20b913c11e8c31c3f2b3763bdbddf9684d5e6f0" exitCode=0 Dec 03 11:06:53 crc kubenswrapper[4756]: I1203 11:06:53.113992 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" event={"ID":"660ad08f-eaf7-484d-a630-05b51ac13d57","Type":"ContainerDied","Data":"071236b9afbefa042fdbbd26d20b913c11e8c31c3f2b3763bdbddf9684d5e6f0"} Dec 03 11:06:53 crc kubenswrapper[4756]: I1203 11:06:53.117131 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgqw9" event={"ID":"adf7b234-118f-4939-ade2-e27c86f1d45e","Type":"ContainerStarted","Data":"9d1540321eab87165ece4ddbc925d86cad1243a9916cfaaf2d3a3ce47f566dc9"} Dec 03 11:06:53 crc kubenswrapper[4756]: I1203 11:06:53.120092 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="01dfa0931fd4257f5b01935057514c563c7b4e22621a95eaed462238153a1e0f" exitCode=0 Dec 03 11:06:53 crc kubenswrapper[4756]: I1203 11:06:53.120143 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"01dfa0931fd4257f5b01935057514c563c7b4e22621a95eaed462238153a1e0f"} Dec 03 11:06:53 crc kubenswrapper[4756]: I1203 11:06:53.120167 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"09868558298ce0e0fe5ddedbb9f992422ad279637132aad7bc1ac485611cc892"} Dec 03 11:06:53 crc 
kubenswrapper[4756]: I1203 11:06:53.120196 4756 scope.go:117] "RemoveContainer" containerID="71b46a8d298e8651d77837df1ef004e304d9e2a98b64e7662b8846847a02f75c" Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.130725 4756 generic.go:334] "Generic (PLEG): container finished" podID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerID="9d1540321eab87165ece4ddbc925d86cad1243a9916cfaaf2d3a3ce47f566dc9" exitCode=0 Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.130781 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgqw9" event={"ID":"adf7b234-118f-4939-ade2-e27c86f1d45e","Type":"ContainerDied","Data":"9d1540321eab87165ece4ddbc925d86cad1243a9916cfaaf2d3a3ce47f566dc9"} Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.453966 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.578263 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-bundle\") pod \"660ad08f-eaf7-484d-a630-05b51ac13d57\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.578359 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-util\") pod \"660ad08f-eaf7-484d-a630-05b51ac13d57\" (UID: \"660ad08f-eaf7-484d-a630-05b51ac13d57\") " Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.578381 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqxbk\" (UniqueName: \"kubernetes.io/projected/660ad08f-eaf7-484d-a630-05b51ac13d57-kube-api-access-zqxbk\") pod \"660ad08f-eaf7-484d-a630-05b51ac13d57\" (UID: 
\"660ad08f-eaf7-484d-a630-05b51ac13d57\") " Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.579619 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-bundle" (OuterVolumeSpecName: "bundle") pod "660ad08f-eaf7-484d-a630-05b51ac13d57" (UID: "660ad08f-eaf7-484d-a630-05b51ac13d57"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.584055 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660ad08f-eaf7-484d-a630-05b51ac13d57-kube-api-access-zqxbk" (OuterVolumeSpecName: "kube-api-access-zqxbk") pod "660ad08f-eaf7-484d-a630-05b51ac13d57" (UID: "660ad08f-eaf7-484d-a630-05b51ac13d57"). InnerVolumeSpecName "kube-api-access-zqxbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.679738 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.679778 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqxbk\" (UniqueName: \"kubernetes.io/projected/660ad08f-eaf7-484d-a630-05b51ac13d57-kube-api-access-zqxbk\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.962646 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-util" (OuterVolumeSpecName: "util") pod "660ad08f-eaf7-484d-a630-05b51ac13d57" (UID: "660ad08f-eaf7-484d-a630-05b51ac13d57"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:06:54 crc kubenswrapper[4756]: I1203 11:06:54.985465 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/660ad08f-eaf7-484d-a630-05b51ac13d57-util\") on node \"crc\" DevicePath \"\"" Dec 03 11:06:55 crc kubenswrapper[4756]: I1203 11:06:55.142986 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" event={"ID":"660ad08f-eaf7-484d-a630-05b51ac13d57","Type":"ContainerDied","Data":"34aba2b07b23437731af88fb94dd4e9e079a8a2849f51d3144d3fe0ea00ad76a"} Dec 03 11:06:55 crc kubenswrapper[4756]: I1203 11:06:55.143656 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34aba2b07b23437731af88fb94dd4e9e079a8a2849f51d3144d3fe0ea00ad76a" Dec 03 11:06:55 crc kubenswrapper[4756]: I1203 11:06:55.143049 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.066754 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x"] Dec 03 11:06:58 crc kubenswrapper[4756]: E1203 11:06:58.067923 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerName="util" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.067940 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerName="util" Dec 03 11:06:58 crc kubenswrapper[4756]: E1203 11:06:58.068012 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerName="pull" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.068022 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerName="pull" Dec 03 11:06:58 crc kubenswrapper[4756]: E1203 11:06:58.068034 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerName="extract" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.068042 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerName="extract" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.068148 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="660ad08f-eaf7-484d-a630-05b51ac13d57" containerName="extract" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.068676 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.077851 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.078573 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.080744 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-s87bq" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.081429 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x"] Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.246055 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9n7s\" (UniqueName: \"kubernetes.io/projected/3ff12a67-01d3-4dcd-9528-4625113befa2-kube-api-access-d9n7s\") pod \"nmstate-operator-5b5b58f5c8-49m5x\" (UID: \"3ff12a67-01d3-4dcd-9528-4625113befa2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" Dec 
03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.347985 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9n7s\" (UniqueName: \"kubernetes.io/projected/3ff12a67-01d3-4dcd-9528-4625113befa2-kube-api-access-d9n7s\") pod \"nmstate-operator-5b5b58f5c8-49m5x\" (UID: \"3ff12a67-01d3-4dcd-9528-4625113befa2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.371217 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9n7s\" (UniqueName: \"kubernetes.io/projected/3ff12a67-01d3-4dcd-9528-4625113befa2-kube-api-access-d9n7s\") pod \"nmstate-operator-5b5b58f5c8-49m5x\" (UID: \"3ff12a67-01d3-4dcd-9528-4625113befa2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.386867 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" Dec 03 11:06:58 crc kubenswrapper[4756]: I1203 11:06:58.657873 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x"] Dec 03 11:06:59 crc kubenswrapper[4756]: I1203 11:06:59.169990 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" event={"ID":"3ff12a67-01d3-4dcd-9528-4625113befa2","Type":"ContainerStarted","Data":"8f3bdf689e7fe0625e340224ec27a31d1849ed534b59dbb4bd5a8275636cc523"} Dec 03 11:06:59 crc kubenswrapper[4756]: I1203 11:06:59.174522 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgqw9" event={"ID":"adf7b234-118f-4939-ade2-e27c86f1d45e","Type":"ContainerStarted","Data":"57c14946dae2046ea20e5753177c2b2fd8bd6ee7bb639a115823c2768ce77006"} Dec 03 11:06:59 crc kubenswrapper[4756]: I1203 11:06:59.198970 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-bgqw9" podStartSLOduration=3.5719430709999997 podStartE2EDuration="9.198926468s" podCreationTimestamp="2025-12-03 11:06:50 +0000 UTC" firstStartedPulling="2025-12-03 11:06:52.107731094 +0000 UTC m=+823.137732338" lastFinishedPulling="2025-12-03 11:06:57.734714491 +0000 UTC m=+828.764715735" observedRunningTime="2025-12-03 11:06:59.195588333 +0000 UTC m=+830.225589597" watchObservedRunningTime="2025-12-03 11:06:59.198926468 +0000 UTC m=+830.228927712" Dec 03 11:07:00 crc kubenswrapper[4756]: I1203 11:07:00.659934 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:07:00 crc kubenswrapper[4756]: I1203 11:07:00.660375 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:07:01 crc kubenswrapper[4756]: I1203 11:07:01.709248 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bgqw9" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="registry-server" probeResult="failure" output=< Dec 03 11:07:01 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 11:07:01 crc kubenswrapper[4756]: > Dec 03 11:07:09 crc kubenswrapper[4756]: I1203 11:07:09.308587 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" event={"ID":"3ff12a67-01d3-4dcd-9528-4625113befa2","Type":"ContainerStarted","Data":"0b1fc2ad8bb69d0283603a6c4dd65eecfd7863620af453602ba43cfa2a56174b"} Dec 03 11:07:09 crc kubenswrapper[4756]: I1203 11:07:09.328319 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-49m5x" podStartSLOduration=2.381351205 podStartE2EDuration="11.328294761s" podCreationTimestamp="2025-12-03 11:06:58 +0000 UTC" firstStartedPulling="2025-12-03 11:06:58.676601971 +0000 UTC 
m=+829.706603205" lastFinishedPulling="2025-12-03 11:07:07.623545487 +0000 UTC m=+838.653546761" observedRunningTime="2025-12-03 11:07:09.325841844 +0000 UTC m=+840.355843098" watchObservedRunningTime="2025-12-03 11:07:09.328294761 +0000 UTC m=+840.358296035" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.411176 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.412420 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.417103 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gxj7d" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.418683 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.428566 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-znss6"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.439373 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.445260 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.455725 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-znss6"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.469276 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8f7l7"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.470370 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.525355 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5e29e32a-6823-448d-9af0-1b4aa213a0d2-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-ztzcc\" (UID: \"5e29e32a-6823-448d-9af0-1b4aa213a0d2\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.525434 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsv9\" (UniqueName: \"kubernetes.io/projected/5e29e32a-6823-448d-9af0-1b4aa213a0d2-kube-api-access-5xsv9\") pod \"nmstate-webhook-5f6d4c5ccb-ztzcc\" (UID: \"5e29e32a-6823-448d-9af0-1b4aa213a0d2\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.579429 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.580523 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.583703 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rqnd6" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.583971 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.586598 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.598215 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.627479 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4497\" (UniqueName: \"kubernetes.io/projected/f3c59b4b-7ffa-46bb-a92c-d8ae4218335d-kube-api-access-z4497\") pod \"nmstate-metrics-7f946cbc9-znss6\" (UID: \"f3c59b4b-7ffa-46bb-a92c-d8ae4218335d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.627578 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5e29e32a-6823-448d-9af0-1b4aa213a0d2-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-ztzcc\" (UID: \"5e29e32a-6823-448d-9af0-1b4aa213a0d2\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.627690 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-ovs-socket\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " 
pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.627745 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-nmstate-lock\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.627780 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xsv9\" (UniqueName: \"kubernetes.io/projected/5e29e32a-6823-448d-9af0-1b4aa213a0d2-kube-api-access-5xsv9\") pod \"nmstate-webhook-5f6d4c5ccb-ztzcc\" (UID: \"5e29e32a-6823-448d-9af0-1b4aa213a0d2\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.627822 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-dbus-socket\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.627852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flqjg\" (UniqueName: \"kubernetes.io/projected/379d2dc0-2b74-4c9e-936e-160b41e74098-kube-api-access-flqjg\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.636991 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5e29e32a-6823-448d-9af0-1b4aa213a0d2-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-ztzcc\" (UID: 
\"5e29e32a-6823-448d-9af0-1b4aa213a0d2\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.655028 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xsv9\" (UniqueName: \"kubernetes.io/projected/5e29e32a-6823-448d-9af0-1b4aa213a0d2-kube-api-access-5xsv9\") pod \"nmstate-webhook-5f6d4c5ccb-ztzcc\" (UID: \"5e29e32a-6823-448d-9af0-1b4aa213a0d2\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729336 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-dbus-socket\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729414 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flqjg\" (UniqueName: \"kubernetes.io/projected/379d2dc0-2b74-4c9e-936e-160b41e74098-kube-api-access-flqjg\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729478 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4497\" (UniqueName: \"kubernetes.io/projected/f3c59b4b-7ffa-46bb-a92c-d8ae4218335d-kube-api-access-z4497\") pod \"nmstate-metrics-7f946cbc9-znss6\" (UID: \"f3c59b4b-7ffa-46bb-a92c-d8ae4218335d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729521 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbf2w\" (UniqueName: \"kubernetes.io/projected/ef21e85d-40d5-4131-af1d-5bc35102ef29-kube-api-access-kbf2w\") pod 
\"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729571 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef21e85d-40d5-4131-af1d-5bc35102ef29-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729605 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ef21e85d-40d5-4131-af1d-5bc35102ef29-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-ovs-socket\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729702 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-nmstate-lock\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729796 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-nmstate-lock\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.729896 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-dbus-socket\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.730012 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/379d2dc0-2b74-4c9e-936e-160b41e74098-ovs-socket\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.735083 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.754999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4497\" (UniqueName: \"kubernetes.io/projected/f3c59b4b-7ffa-46bb-a92c-d8ae4218335d-kube-api-access-z4497\") pod \"nmstate-metrics-7f946cbc9-znss6\" (UID: \"f3c59b4b-7ffa-46bb-a92c-d8ae4218335d\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.762705 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flqjg\" (UniqueName: \"kubernetes.io/projected/379d2dc0-2b74-4c9e-936e-160b41e74098-kube-api-access-flqjg\") pod \"nmstate-handler-8f7l7\" (UID: \"379d2dc0-2b74-4c9e-936e-160b41e74098\") " pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.765448 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.789368 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.790499 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.832797 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbf2w\" (UniqueName: \"kubernetes.io/projected/ef21e85d-40d5-4131-af1d-5bc35102ef29-kube-api-access-kbf2w\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.832874 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef21e85d-40d5-4131-af1d-5bc35102ef29-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.832901 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ef21e85d-40d5-4131-af1d-5bc35102ef29-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.834387 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c7c8f5b58-4swmh"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.835318 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ef21e85d-40d5-4131-af1d-5bc35102ef29-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.835466 4756 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.839131 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef21e85d-40d5-4131-af1d-5bc35102ef29-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.855062 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c7c8f5b58-4swmh"] Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.897297 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbf2w\" (UniqueName: \"kubernetes.io/projected/ef21e85d-40d5-4131-af1d-5bc35102ef29-kube-api-access-kbf2w\") pod \"nmstate-console-plugin-7fbb5f6569-9497w\" (UID: \"ef21e85d-40d5-4131-af1d-5bc35102ef29\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.921590 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.934194 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-oauth-serving-cert\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.934248 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx5kw\" (UniqueName: \"kubernetes.io/projected/3791ee27-df51-468f-b191-9904039bcba7-kube-api-access-rx5kw\") 
pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.934296 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3791ee27-df51-468f-b191-9904039bcba7-console-oauth-config\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.934327 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3791ee27-df51-468f-b191-9904039bcba7-console-serving-cert\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.934358 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-console-config\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.934433 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-trusted-ca-bundle\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:10 crc kubenswrapper[4756]: I1203 11:07:10.934659 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-service-ca\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.036172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-service-ca\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.036588 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-oauth-serving-cert\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.036623 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx5kw\" (UniqueName: \"kubernetes.io/projected/3791ee27-df51-468f-b191-9904039bcba7-kube-api-access-rx5kw\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.036667 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3791ee27-df51-468f-b191-9904039bcba7-console-oauth-config\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.037499 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3791ee27-df51-468f-b191-9904039bcba7-console-serving-cert\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.037555 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-service-ca\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.037561 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-console-config\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.037798 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-oauth-serving-cert\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.038060 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-trusted-ca-bundle\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.038236 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-console-config\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.040209 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3791ee27-df51-468f-b191-9904039bcba7-trusted-ca-bundle\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.043908 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3791ee27-df51-468f-b191-9904039bcba7-console-serving-cert\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.044132 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgqw9"] Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.044867 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3791ee27-df51-468f-b191-9904039bcba7-console-oauth-config\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.057851 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx5kw\" (UniqueName: \"kubernetes.io/projected/3791ee27-df51-468f-b191-9904039bcba7-kube-api-access-rx5kw\") pod \"console-c7c8f5b58-4swmh\" (UID: \"3791ee27-df51-468f-b191-9904039bcba7\") " pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.140059 
4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc"] Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.194768 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.200639 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-znss6"] Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.203363 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.335109 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8f7l7" event={"ID":"379d2dc0-2b74-4c9e-936e-160b41e74098","Type":"ContainerStarted","Data":"78824c60ad381a8315dd5e27298d698bab507bb833b63910b1e01828f2474dd1"} Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.337296 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" event={"ID":"5e29e32a-6823-448d-9af0-1b4aa213a0d2","Type":"ContainerStarted","Data":"ff6ab1d749f7b603dba6ef8339e1ee2bbb921777062e4983a4dbd5e011bad6c9"} Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.340592 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" event={"ID":"f3c59b4b-7ffa-46bb-a92c-d8ae4218335d","Type":"ContainerStarted","Data":"b96b5c09fe40f519e62e331cc9d38848940146dc240cfe875a052ef186a20cf5"} Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.398540 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c7c8f5b58-4swmh"] Dec 03 11:07:11 crc kubenswrapper[4756]: I1203 11:07:11.436985 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w"] 
Dec 03 11:07:11 crc kubenswrapper[4756]: W1203 11:07:11.442981 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef21e85d_40d5_4131_af1d_5bc35102ef29.slice/crio-43f345e30f5126795a80c962a4407426b51ea8cec0a302fd44fa64e6ba8d5663 WatchSource:0}: Error finding container 43f345e30f5126795a80c962a4407426b51ea8cec0a302fd44fa64e6ba8d5663: Status 404 returned error can't find the container with id 43f345e30f5126795a80c962a4407426b51ea8cec0a302fd44fa64e6ba8d5663 Dec 03 11:07:12 crc kubenswrapper[4756]: I1203 11:07:12.348045 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" event={"ID":"ef21e85d-40d5-4131-af1d-5bc35102ef29","Type":"ContainerStarted","Data":"43f345e30f5126795a80c962a4407426b51ea8cec0a302fd44fa64e6ba8d5663"} Dec 03 11:07:12 crc kubenswrapper[4756]: I1203 11:07:12.350299 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c7c8f5b58-4swmh" event={"ID":"3791ee27-df51-468f-b191-9904039bcba7","Type":"ContainerStarted","Data":"7ea1b0c34cf91c499340e970ee9de03cf207161fcd72cd4befb5609862799bbc"} Dec 03 11:07:12 crc kubenswrapper[4756]: I1203 11:07:12.350391 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c7c8f5b58-4swmh" event={"ID":"3791ee27-df51-468f-b191-9904039bcba7","Type":"ContainerStarted","Data":"edbe9dc9ba4d6f5bb18b4ded6417d3e47bee34b8febfca80f2891a0ef68658ab"} Dec 03 11:07:12 crc kubenswrapper[4756]: I1203 11:07:12.351515 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bgqw9" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="registry-server" containerID="cri-o://57c14946dae2046ea20e5753177c2b2fd8bd6ee7bb639a115823c2768ce77006" gracePeriod=2 Dec 03 11:07:12 crc kubenswrapper[4756]: I1203 11:07:12.381010 4756 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-c7c8f5b58-4swmh" podStartSLOduration=2.380978023 podStartE2EDuration="2.380978023s" podCreationTimestamp="2025-12-03 11:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:07:12.373251741 +0000 UTC m=+843.403252995" watchObservedRunningTime="2025-12-03 11:07:12.380978023 +0000 UTC m=+843.410979277" Dec 03 11:07:13 crc kubenswrapper[4756]: I1203 11:07:13.364733 4756 generic.go:334] "Generic (PLEG): container finished" podID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerID="57c14946dae2046ea20e5753177c2b2fd8bd6ee7bb639a115823c2768ce77006" exitCode=0 Dec 03 11:07:13 crc kubenswrapper[4756]: I1203 11:07:13.364810 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgqw9" event={"ID":"adf7b234-118f-4939-ade2-e27c86f1d45e","Type":"ContainerDied","Data":"57c14946dae2046ea20e5753177c2b2fd8bd6ee7bb639a115823c2768ce77006"} Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.308013 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.373353 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgqw9" event={"ID":"adf7b234-118f-4939-ade2-e27c86f1d45e","Type":"ContainerDied","Data":"5cb474432ffeea5e53b8f7cb82ea8a247a62aee2a7992f209944b5766745afa6"} Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.373425 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgqw9" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.373495 4756 scope.go:117] "RemoveContainer" containerID="57c14946dae2046ea20e5753177c2b2fd8bd6ee7bb639a115823c2768ce77006" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.395515 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js622\" (UniqueName: \"kubernetes.io/projected/adf7b234-118f-4939-ade2-e27c86f1d45e-kube-api-access-js622\") pod \"adf7b234-118f-4939-ade2-e27c86f1d45e\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.395675 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-utilities\") pod \"adf7b234-118f-4939-ade2-e27c86f1d45e\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.395731 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-catalog-content\") pod \"adf7b234-118f-4939-ade2-e27c86f1d45e\" (UID: \"adf7b234-118f-4939-ade2-e27c86f1d45e\") " Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.398884 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-utilities" (OuterVolumeSpecName: "utilities") pod "adf7b234-118f-4939-ade2-e27c86f1d45e" (UID: "adf7b234-118f-4939-ade2-e27c86f1d45e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.403612 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf7b234-118f-4939-ade2-e27c86f1d45e-kube-api-access-js622" (OuterVolumeSpecName: "kube-api-access-js622") pod "adf7b234-118f-4939-ade2-e27c86f1d45e" (UID: "adf7b234-118f-4939-ade2-e27c86f1d45e"). InnerVolumeSpecName "kube-api-access-js622". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.497977 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js622\" (UniqueName: \"kubernetes.io/projected/adf7b234-118f-4939-ade2-e27c86f1d45e-kube-api-access-js622\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.498018 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.514524 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adf7b234-118f-4939-ade2-e27c86f1d45e" (UID: "adf7b234-118f-4939-ade2-e27c86f1d45e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.599768 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adf7b234-118f-4939-ade2-e27c86f1d45e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.710430 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgqw9"] Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.715622 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bgqw9"] Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.858094 4756 scope.go:117] "RemoveContainer" containerID="9d1540321eab87165ece4ddbc925d86cad1243a9916cfaaf2d3a3ce47f566dc9" Dec 03 11:07:14 crc kubenswrapper[4756]: I1203 11:07:14.881335 4756 scope.go:117] "RemoveContainer" containerID="27d41fa26397085820d9dcfc5a02436ddacff2a513f4631fbb87e0bb45d0990e" Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.241760 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" path="/var/lib/kubelet/pods/adf7b234-118f-4939-ade2-e27c86f1d45e/volumes" Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.379663 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" event={"ID":"5e29e32a-6823-448d-9af0-1b4aa213a0d2","Type":"ContainerStarted","Data":"9821ba317db7efa7dad627dbead5c66f93b59f436b83e4cc43902a452d1ad16c"} Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.379773 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.380860 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" 
event={"ID":"ef21e85d-40d5-4131-af1d-5bc35102ef29","Type":"ContainerStarted","Data":"44660c360d17a1b2717c5f5c016f464700cfa67f034ac0aa5e5a7f5e8b52c5a6"} Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.382506 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" event={"ID":"f3c59b4b-7ffa-46bb-a92c-d8ae4218335d","Type":"ContainerStarted","Data":"0010110e11119ba00c2bb8f4b7c19f78f72e35de59960766d38e4ce1445e0ff5"} Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.386010 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8f7l7" event={"ID":"379d2dc0-2b74-4c9e-936e-160b41e74098","Type":"ContainerStarted","Data":"b4524eadc519b7f9e70c24b38a38f11575e7fd597a843c764c4107477796f5d5"} Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.386566 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.410748 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" podStartSLOduration=1.6282947 podStartE2EDuration="5.410718402s" podCreationTimestamp="2025-12-03 11:07:10 +0000 UTC" firstStartedPulling="2025-12-03 11:07:11.148889468 +0000 UTC m=+842.178890712" lastFinishedPulling="2025-12-03 11:07:14.93131317 +0000 UTC m=+845.961314414" observedRunningTime="2025-12-03 11:07:15.403384201 +0000 UTC m=+846.433385455" watchObservedRunningTime="2025-12-03 11:07:15.410718402 +0000 UTC m=+846.440719646" Dec 03 11:07:15 crc kubenswrapper[4756]: I1203 11:07:15.447240 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8f7l7" podStartSLOduration=1.439296561 podStartE2EDuration="5.447212527s" podCreationTimestamp="2025-12-03 11:07:10 +0000 UTC" firstStartedPulling="2025-12-03 11:07:10.905624706 +0000 UTC m=+841.935625950" 
lastFinishedPulling="2025-12-03 11:07:14.913540672 +0000 UTC m=+845.943541916" observedRunningTime="2025-12-03 11:07:15.442638139 +0000 UTC m=+846.472639393" watchObservedRunningTime="2025-12-03 11:07:15.447212527 +0000 UTC m=+846.477213781" Dec 03 11:07:18 crc kubenswrapper[4756]: I1203 11:07:18.407700 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" event={"ID":"f3c59b4b-7ffa-46bb-a92c-d8ae4218335d","Type":"ContainerStarted","Data":"a82cd90bb5161c5cfc62d14f19fb431e48e15b386daa1d28fe800d0d12e7b4af"} Dec 03 11:07:18 crc kubenswrapper[4756]: I1203 11:07:18.424438 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-znss6" podStartSLOduration=2.266327138 podStartE2EDuration="8.424419522s" podCreationTimestamp="2025-12-03 11:07:10 +0000 UTC" firstStartedPulling="2025-12-03 11:07:11.214780116 +0000 UTC m=+842.244781360" lastFinishedPulling="2025-12-03 11:07:17.37287249 +0000 UTC m=+848.402873744" observedRunningTime="2025-12-03 11:07:18.423600707 +0000 UTC m=+849.453601971" watchObservedRunningTime="2025-12-03 11:07:18.424419522 +0000 UTC m=+849.454420766" Dec 03 11:07:18 crc kubenswrapper[4756]: I1203 11:07:18.428485 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-9497w" podStartSLOduration=4.960430956 podStartE2EDuration="8.428469274s" podCreationTimestamp="2025-12-03 11:07:10 +0000 UTC" firstStartedPulling="2025-12-03 11:07:11.445500564 +0000 UTC m=+842.475501798" lastFinishedPulling="2025-12-03 11:07:14.913538872 +0000 UTC m=+845.943540116" observedRunningTime="2025-12-03 11:07:15.462711627 +0000 UTC m=+846.492712881" watchObservedRunningTime="2025-12-03 11:07:18.428469274 +0000 UTC m=+849.458470518" Dec 03 11:07:20 crc kubenswrapper[4756]: I1203 11:07:20.815134 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-8f7l7" Dec 03 11:07:21 crc kubenswrapper[4756]: I1203 11:07:21.205007 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:21 crc kubenswrapper[4756]: I1203 11:07:21.205064 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:21 crc kubenswrapper[4756]: I1203 11:07:21.212555 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:21 crc kubenswrapper[4756]: I1203 11:07:21.432278 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c7c8f5b58-4swmh" Dec 03 11:07:21 crc kubenswrapper[4756]: I1203 11:07:21.495688 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hf4d2"] Dec 03 11:07:30 crc kubenswrapper[4756]: I1203 11:07:30.742193 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-ztzcc" Dec 03 11:07:46 crc kubenswrapper[4756]: I1203 11:07:46.561017 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hf4d2" podUID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" containerName="console" containerID="cri-o://bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c" gracePeriod=15 Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.165053 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hf4d2_0e6b1897-6a37-4181-bcb1-876d1205f2ab/console/0.log" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.165888 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.292371 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-service-ca\") pod \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.292903 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jngt\" (UniqueName: \"kubernetes.io/projected/0e6b1897-6a37-4181-bcb1-876d1205f2ab-kube-api-access-4jngt\") pod \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.293008 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-serving-cert\") pod \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.293060 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-config\") pod \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.293093 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-oauth-config\") pod \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.293129 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-oauth-serving-cert\") pod \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.293552 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-service-ca" (OuterVolumeSpecName: "service-ca") pod "0e6b1897-6a37-4181-bcb1-876d1205f2ab" (UID: "0e6b1897-6a37-4181-bcb1-876d1205f2ab"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.293709 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-trusted-ca-bundle\") pod \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\" (UID: \"0e6b1897-6a37-4181-bcb1-876d1205f2ab\") " Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.294176 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.294324 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0e6b1897-6a37-4181-bcb1-876d1205f2ab" (UID: "0e6b1897-6a37-4181-bcb1-876d1205f2ab"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.294406 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-config" (OuterVolumeSpecName: "console-config") pod "0e6b1897-6a37-4181-bcb1-876d1205f2ab" (UID: "0e6b1897-6a37-4181-bcb1-876d1205f2ab"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.294884 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0e6b1897-6a37-4181-bcb1-876d1205f2ab" (UID: "0e6b1897-6a37-4181-bcb1-876d1205f2ab"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.303595 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e6b1897-6a37-4181-bcb1-876d1205f2ab-kube-api-access-4jngt" (OuterVolumeSpecName: "kube-api-access-4jngt") pod "0e6b1897-6a37-4181-bcb1-876d1205f2ab" (UID: "0e6b1897-6a37-4181-bcb1-876d1205f2ab"). InnerVolumeSpecName "kube-api-access-4jngt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.303617 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0e6b1897-6a37-4181-bcb1-876d1205f2ab" (UID: "0e6b1897-6a37-4181-bcb1-876d1205f2ab"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.303834 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0e6b1897-6a37-4181-bcb1-876d1205f2ab" (UID: "0e6b1897-6a37-4181-bcb1-876d1205f2ab"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.395205 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.395275 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jngt\" (UniqueName: \"kubernetes.io/projected/0e6b1897-6a37-4181-bcb1-876d1205f2ab-kube-api-access-4jngt\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.395288 4756 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.395299 4756 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.395308 4756 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0e6b1897-6a37-4181-bcb1-876d1205f2ab-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.395317 4756 reconciler_common.go:293] "Volume detached for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0e6b1897-6a37-4181-bcb1-876d1205f2ab-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.633468 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hf4d2_0e6b1897-6a37-4181-bcb1-876d1205f2ab/console/0.log" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.633870 4756 generic.go:334] "Generic (PLEG): container finished" podID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" containerID="bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c" exitCode=2 Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.633914 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hf4d2" event={"ID":"0e6b1897-6a37-4181-bcb1-876d1205f2ab","Type":"ContainerDied","Data":"bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c"} Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.634012 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hf4d2" event={"ID":"0e6b1897-6a37-4181-bcb1-876d1205f2ab","Type":"ContainerDied","Data":"dddbe53dfdcb1427b08dd5f467a9b73082d0f9cc7df722db13d23bfe7054e845"} Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.634041 4756 scope.go:117] "RemoveContainer" containerID="bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.634052 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hf4d2" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.671624 4756 scope.go:117] "RemoveContainer" containerID="bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c" Dec 03 11:07:47 crc kubenswrapper[4756]: E1203 11:07:47.672474 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c\": container with ID starting with bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c not found: ID does not exist" containerID="bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.672525 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c"} err="failed to get container status \"bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c\": rpc error: code = NotFound desc = could not find container \"bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c\": container with ID starting with bed06d8e00fade33094249178a5628458c11aa8e6c979427a8dd2a36576abc6c not found: ID does not exist" Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.672571 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hf4d2"] Dec 03 11:07:47 crc kubenswrapper[4756]: I1203 11:07:47.678160 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hf4d2"] Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.249975 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" path="/var/lib/kubelet/pods/0e6b1897-6a37-4181-bcb1-876d1205f2ab/volumes" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.401658 4756 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84"] Dec 03 11:07:49 crc kubenswrapper[4756]: E1203 11:07:49.402198 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" containerName="console" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.402280 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" containerName="console" Dec 03 11:07:49 crc kubenswrapper[4756]: E1203 11:07:49.402339 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="extract-utilities" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.402386 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="extract-utilities" Dec 03 11:07:49 crc kubenswrapper[4756]: E1203 11:07:49.402445 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="extract-content" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.402497 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="extract-content" Dec 03 11:07:49 crc kubenswrapper[4756]: E1203 11:07:49.402563 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="registry-server" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.402612 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="registry-server" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.402771 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf7b234-118f-4939-ade2-e27c86f1d45e" containerName="registry-server" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.402830 4756 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0e6b1897-6a37-4181-bcb1-876d1205f2ab" containerName="console" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.403781 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.405758 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.422327 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84"] Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.529706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.530370 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-949lt\" (UniqueName: \"kubernetes.io/projected/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-kube-api-access-949lt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.530433 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-util\") pod 
\"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.632208 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.632664 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-949lt\" (UniqueName: \"kubernetes.io/projected/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-kube-api-access-949lt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.632857 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.633140 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.633457 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.655716 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-949lt\" (UniqueName: \"kubernetes.io/projected/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-kube-api-access-949lt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.719550 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:49 crc kubenswrapper[4756]: I1203 11:07:49.923468 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84"] Dec 03 11:07:50 crc kubenswrapper[4756]: I1203 11:07:50.656454 4756 generic.go:334] "Generic (PLEG): container finished" podID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerID="1dd9990a67f4a414405506fcff8be42c09d69126388b890cc5587f58c75a210b" exitCode=0 Dec 03 11:07:50 crc kubenswrapper[4756]: I1203 11:07:50.656531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" event={"ID":"c8b2a0d1-86e1-4b61-92df-c41e299a3f58","Type":"ContainerDied","Data":"1dd9990a67f4a414405506fcff8be42c09d69126388b890cc5587f58c75a210b"} Dec 03 11:07:50 crc kubenswrapper[4756]: I1203 11:07:50.657458 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" event={"ID":"c8b2a0d1-86e1-4b61-92df-c41e299a3f58","Type":"ContainerStarted","Data":"fa49b1c4af7b63b0f05b02cdadb079f283646b9755f83087f4fa50aa04a4bac3"} Dec 03 11:07:55 crc kubenswrapper[4756]: I1203 11:07:55.695890 4756 generic.go:334] "Generic (PLEG): container finished" podID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerID="eb6b962b309355bc0a9fedfb1816957b44a8dc885e167174081e6b1bd29b41d7" exitCode=0 Dec 03 11:07:55 crc kubenswrapper[4756]: I1203 11:07:55.696008 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" event={"ID":"c8b2a0d1-86e1-4b61-92df-c41e299a3f58","Type":"ContainerDied","Data":"eb6b962b309355bc0a9fedfb1816957b44a8dc885e167174081e6b1bd29b41d7"} Dec 03 11:07:56 crc kubenswrapper[4756]: I1203 11:07:56.712487 4756 
generic.go:334] "Generic (PLEG): container finished" podID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerID="bbef36c3d757f85a34a076b9cc5a7fdb40660aaf0582cebccf29e273e9d96a1f" exitCode=0 Dec 03 11:07:56 crc kubenswrapper[4756]: I1203 11:07:56.712620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" event={"ID":"c8b2a0d1-86e1-4b61-92df-c41e299a3f58","Type":"ContainerDied","Data":"bbef36c3d757f85a34a076b9cc5a7fdb40660aaf0582cebccf29e273e9d96a1f"} Dec 03 11:07:57 crc kubenswrapper[4756]: I1203 11:07:57.974123 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.168999 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-util\") pod \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.169130 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-949lt\" (UniqueName: \"kubernetes.io/projected/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-kube-api-access-949lt\") pod \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.169189 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-bundle\") pod \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\" (UID: \"c8b2a0d1-86e1-4b61-92df-c41e299a3f58\") " Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.170387 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-bundle" (OuterVolumeSpecName: "bundle") pod "c8b2a0d1-86e1-4b61-92df-c41e299a3f58" (UID: "c8b2a0d1-86e1-4b61-92df-c41e299a3f58"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.176367 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-kube-api-access-949lt" (OuterVolumeSpecName: "kube-api-access-949lt") pod "c8b2a0d1-86e1-4b61-92df-c41e299a3f58" (UID: "c8b2a0d1-86e1-4b61-92df-c41e299a3f58"). InnerVolumeSpecName "kube-api-access-949lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.179701 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-util" (OuterVolumeSpecName: "util") pod "c8b2a0d1-86e1-4b61-92df-c41e299a3f58" (UID: "c8b2a0d1-86e1-4b61-92df-c41e299a3f58"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.271018 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-util\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.271091 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-949lt\" (UniqueName: \"kubernetes.io/projected/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-kube-api-access-949lt\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.271120 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8b2a0d1-86e1-4b61-92df-c41e299a3f58-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.734248 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" event={"ID":"c8b2a0d1-86e1-4b61-92df-c41e299a3f58","Type":"ContainerDied","Data":"fa49b1c4af7b63b0f05b02cdadb079f283646b9755f83087f4fa50aa04a4bac3"} Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.734310 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa49b1c4af7b63b0f05b02cdadb079f283646b9755f83087f4fa50aa04a4bac3" Dec 03 11:07:58 crc kubenswrapper[4756]: I1203 11:07:58.734371 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.543238 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6"] Dec 03 11:08:07 crc kubenswrapper[4756]: E1203 11:08:07.544390 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerName="util" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.544406 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerName="util" Dec 03 11:08:07 crc kubenswrapper[4756]: E1203 11:08:07.544417 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerName="extract" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.544424 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerName="extract" Dec 03 11:08:07 crc kubenswrapper[4756]: E1203 11:08:07.544432 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerName="pull" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.544437 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerName="pull" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.544537 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b2a0d1-86e1-4b61-92df-c41e299a3f58" containerName="extract" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.545037 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.550275 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4j5q\" (UniqueName: \"kubernetes.io/projected/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-kube-api-access-g4j5q\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.550369 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-webhook-cert\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.550393 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-apiservice-cert\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.552393 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.552493 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.552519 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"metallb-system"/"openshift-service-ca.crt" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.553696 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-z5z8j" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.553905 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.559660 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6"] Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.651259 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-webhook-cert\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.651324 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-apiservice-cert\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.651361 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4j5q\" (UniqueName: \"kubernetes.io/projected/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-kube-api-access-g4j5q\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 
11:08:07.657687 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-apiservice-cert\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.662627 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-webhook-cert\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.672430 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4j5q\" (UniqueName: \"kubernetes.io/projected/ca73ab13-f16d-40fe-b7a1-f2c5b93e7456-kube-api-access-g4j5q\") pod \"metallb-operator-controller-manager-fdcbcf598-mh4g6\" (UID: \"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456\") " pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.866745 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.911267 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb"] Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.912346 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.915456 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.916014 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hfdxs" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.916225 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 03 11:08:07 crc kubenswrapper[4756]: I1203 11:08:07.935666 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb"] Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.062134 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a1c84bb-c40c-4af4-80a4-75991c028724-apiservice-cert\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.062189 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a1c84bb-c40c-4af4-80a4-75991c028724-webhook-cert\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.063249 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2t5\" (UniqueName: 
\"kubernetes.io/projected/3a1c84bb-c40c-4af4-80a4-75991c028724-kube-api-access-kr2t5\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.167879 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr2t5\" (UniqueName: \"kubernetes.io/projected/3a1c84bb-c40c-4af4-80a4-75991c028724-kube-api-access-kr2t5\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.168063 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a1c84bb-c40c-4af4-80a4-75991c028724-apiservice-cert\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.168084 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a1c84bb-c40c-4af4-80a4-75991c028724-webhook-cert\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.177041 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a1c84bb-c40c-4af4-80a4-75991c028724-apiservice-cert\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" 
Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.188781 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a1c84bb-c40c-4af4-80a4-75991c028724-webhook-cert\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.191005 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr2t5\" (UniqueName: \"kubernetes.io/projected/3a1c84bb-c40c-4af4-80a4-75991c028724-kube-api-access-kr2t5\") pod \"metallb-operator-webhook-server-6b69d8d987-v5lqb\" (UID: \"3a1c84bb-c40c-4af4-80a4-75991c028724\") " pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.243241 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.245886 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6"] Dec 03 11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.760186 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb"] Dec 03 11:08:08 crc kubenswrapper[4756]: W1203 11:08:08.790698 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a1c84bb_c40c_4af4_80a4_75991c028724.slice/crio-57bd729663f54994485ca37768e4f80db22c4a88fc4a27e42fdafacccf88f829 WatchSource:0}: Error finding container 57bd729663f54994485ca37768e4f80db22c4a88fc4a27e42fdafacccf88f829: Status 404 returned error can't find the container with id 57bd729663f54994485ca37768e4f80db22c4a88fc4a27e42fdafacccf88f829 Dec 03 
11:08:08 crc kubenswrapper[4756]: I1203 11:08:08.807555 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" event={"ID":"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456","Type":"ContainerStarted","Data":"dd56e77a087743f9387dd1377129d212ad4f5f91fa5fa48c7d7cb03c7faceba2"} Dec 03 11:08:09 crc kubenswrapper[4756]: I1203 11:08:09.824667 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" event={"ID":"3a1c84bb-c40c-4af4-80a4-75991c028724","Type":"ContainerStarted","Data":"57bd729663f54994485ca37768e4f80db22c4a88fc4a27e42fdafacccf88f829"} Dec 03 11:08:11 crc kubenswrapper[4756]: I1203 11:08:11.837415 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" event={"ID":"ca73ab13-f16d-40fe-b7a1-f2c5b93e7456","Type":"ContainerStarted","Data":"a36688768e15e9e30e739e2c80f69f2d0151cf416ed651bb640ba9658c7896da"} Dec 03 11:08:11 crc kubenswrapper[4756]: I1203 11:08:11.838352 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:11 crc kubenswrapper[4756]: I1203 11:08:11.858166 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" podStartSLOduration=1.845195236 podStartE2EDuration="4.858145254s" podCreationTimestamp="2025-12-03 11:08:07 +0000 UTC" firstStartedPulling="2025-12-03 11:08:08.257722053 +0000 UTC m=+899.287723297" lastFinishedPulling="2025-12-03 11:08:11.270672071 +0000 UTC m=+902.300673315" observedRunningTime="2025-12-03 11:08:11.855003269 +0000 UTC m=+902.885004513" watchObservedRunningTime="2025-12-03 11:08:11.858145254 +0000 UTC m=+902.888146498" Dec 03 11:08:13 crc kubenswrapper[4756]: I1203 11:08:13.856468 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" event={"ID":"3a1c84bb-c40c-4af4-80a4-75991c028724","Type":"ContainerStarted","Data":"f4536a1d29fa8a3762d5f5f6966cf1fd5c5cf831b859274f649887d46b99556d"} Dec 03 11:08:13 crc kubenswrapper[4756]: I1203 11:08:13.857017 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:13 crc kubenswrapper[4756]: I1203 11:08:13.882024 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" podStartSLOduration=2.074521029 podStartE2EDuration="6.881991799s" podCreationTimestamp="2025-12-03 11:08:07 +0000 UTC" firstStartedPulling="2025-12-03 11:08:08.80398526 +0000 UTC m=+899.833986504" lastFinishedPulling="2025-12-03 11:08:13.61145604 +0000 UTC m=+904.641457274" observedRunningTime="2025-12-03 11:08:13.878844844 +0000 UTC m=+904.908846108" watchObservedRunningTime="2025-12-03 11:08:13.881991799 +0000 UTC m=+904.911993043" Dec 03 11:08:28 crc kubenswrapper[4756]: I1203 11:08:28.251058 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b69d8d987-v5lqb" Dec 03 11:08:47 crc kubenswrapper[4756]: I1203 11:08:47.870618 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-fdcbcf598-mh4g6" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.763288 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7n2sf"] Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.766358 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.768624 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.768731 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-94p6n" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.777212 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.780229 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq"] Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.781219 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.786320 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.795599 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq"] Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.847627 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics-certs\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.847714 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvrt\" (UniqueName: \"kubernetes.io/projected/4686b721-17d6-4951-9107-81ba7c5f1658-kube-api-access-ssvrt\") pod 
\"frr-k8s-webhook-server-7fcb986d4-v6jcq\" (UID: \"4686b721-17d6-4951-9107-81ba7c5f1658\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.847755 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltph9\" (UniqueName: \"kubernetes.io/projected/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-kube-api-access-ltph9\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.847778 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-conf\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.847806 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4686b721-17d6-4951-9107-81ba7c5f1658-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-v6jcq\" (UID: \"4686b721-17d6-4951-9107-81ba7c5f1658\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.848113 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.848194 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-reloader\") pod \"frr-k8s-7n2sf\" (UID: 
\"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.848258 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-sockets\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.848341 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-startup\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.879485 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7k2pt"] Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.881227 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7k2pt" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.884074 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.884657 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.884752 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.885350 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9k5mx" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.900413 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-bq2xc"] Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.901530 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.903581 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.927719 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-bq2xc"] Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950143 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-sockets\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950220 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-startup\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950254 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics-certs\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950296 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvrt\" (UniqueName: \"kubernetes.io/projected/4686b721-17d6-4951-9107-81ba7c5f1658-kube-api-access-ssvrt\") pod \"frr-k8s-webhook-server-7fcb986d4-v6jcq\" (UID: \"4686b721-17d6-4951-9107-81ba7c5f1658\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950329 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltph9\" (UniqueName: \"kubernetes.io/projected/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-kube-api-access-ltph9\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950353 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-conf\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950379 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-cert\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950409 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4686b721-17d6-4951-9107-81ba7c5f1658-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-v6jcq\" (UID: \"4686b721-17d6-4951-9107-81ba7c5f1658\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950442 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-metrics-certs\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950465 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" 
(UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:48 crc kubenswrapper[4756]: E1203 11:08:48.950596 4756 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 03 11:08:48 crc kubenswrapper[4756]: E1203 11:08:48.950712 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics-certs podName:ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf nodeName:}" failed. No retries permitted until 2025-12-03 11:08:49.450685654 +0000 UTC m=+940.480687068 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics-certs") pod "frr-k8s-7n2sf" (UID: "ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf") : secret "frr-k8s-certs-secret" not found Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950750 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwhq\" (UniqueName: \"kubernetes.io/projected/dec16985-cd5e-425b-a72d-9a13e835c965-kube-api-access-cvwhq\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950807 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwvbq\" (UniqueName: \"kubernetes.io/projected/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-kube-api-access-zwvbq\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950851 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950910 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-reloader\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950969 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-metrics-certs\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.950997 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dec16985-cd5e-425b-a72d-9a13e835c965-metallb-excludel2\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.951048 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-conf\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.951243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " 
pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.951420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-reloader\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.951452 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-sockets\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.951768 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-frr-startup\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.978495 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvrt\" (UniqueName: \"kubernetes.io/projected/4686b721-17d6-4951-9107-81ba7c5f1658-kube-api-access-ssvrt\") pod \"frr-k8s-webhook-server-7fcb986d4-v6jcq\" (UID: \"4686b721-17d6-4951-9107-81ba7c5f1658\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.978524 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltph9\" (UniqueName: \"kubernetes.io/projected/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-kube-api-access-ltph9\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:48 crc kubenswrapper[4756]: I1203 11:08:48.983509 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/4686b721-17d6-4951-9107-81ba7c5f1658-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-v6jcq\" (UID: \"4686b721-17d6-4951-9107-81ba7c5f1658\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.052892 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-cert\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.052990 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-metrics-certs\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.053019 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.053050 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvwhq\" (UniqueName: \"kubernetes.io/projected/dec16985-cd5e-425b-a72d-9a13e835c965-kube-api-access-cvwhq\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.053072 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwvbq\" (UniqueName: \"kubernetes.io/projected/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-kube-api-access-zwvbq\") pod 
\"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.053104 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-metrics-certs\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.053124 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dec16985-cd5e-425b-a72d-9a13e835c965-metallb-excludel2\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: E1203 11:08:49.053277 4756 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 11:08:49 crc kubenswrapper[4756]: E1203 11:08:49.053406 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist podName:dec16985-cd5e-425b-a72d-9a13e835c965 nodeName:}" failed. No retries permitted until 2025-12-03 11:08:49.553371743 +0000 UTC m=+940.583372987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist") pod "speaker-7k2pt" (UID: "dec16985-cd5e-425b-a72d-9a13e835c965") : secret "metallb-memberlist" not found Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.054210 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dec16985-cd5e-425b-a72d-9a13e835c965-metallb-excludel2\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.055770 4756 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.059578 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-metrics-certs\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.066836 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-metrics-certs\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.067620 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-cert\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.071669 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zwvbq\" (UniqueName: \"kubernetes.io/projected/dbcde851-76fe-4be1-ae50-ed62ebcc75a3-kube-api-access-zwvbq\") pod \"controller-f8648f98b-bq2xc\" (UID: \"dbcde851-76fe-4be1-ae50-ed62ebcc75a3\") " pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.079281 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvwhq\" (UniqueName: \"kubernetes.io/projected/dec16985-cd5e-425b-a72d-9a13e835c965-kube-api-access-cvwhq\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.102686 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.222874 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.398776 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq"] Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.478106 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics-certs\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.486001 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf-metrics-certs\") pod \"frr-k8s-7n2sf\" (UID: \"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf\") " pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.522911 4756 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-bq2xc"] Dec 03 11:08:49 crc kubenswrapper[4756]: W1203 11:08:49.529212 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbcde851_76fe_4be1_ae50_ed62ebcc75a3.slice/crio-923dd610c78a285507ed2ae5b7e17b22b66678cebe2c9193e4d6d1cd79ad9e78 WatchSource:0}: Error finding container 923dd610c78a285507ed2ae5b7e17b22b66678cebe2c9193e4d6d1cd79ad9e78: Status 404 returned error can't find the container with id 923dd610c78a285507ed2ae5b7e17b22b66678cebe2c9193e4d6d1cd79ad9e78 Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.580071 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:49 crc kubenswrapper[4756]: E1203 11:08:49.580737 4756 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 11:08:49 crc kubenswrapper[4756]: E1203 11:08:49.580811 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist podName:dec16985-cd5e-425b-a72d-9a13e835c965 nodeName:}" failed. No retries permitted until 2025-12-03 11:08:50.580791629 +0000 UTC m=+941.610792873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist") pod "speaker-7k2pt" (UID: "dec16985-cd5e-425b-a72d-9a13e835c965") : secret "metallb-memberlist" not found Dec 03 11:08:49 crc kubenswrapper[4756]: I1203 11:08:49.689327 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7n2sf" Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.131356 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-bq2xc" event={"ID":"dbcde851-76fe-4be1-ae50-ed62ebcc75a3","Type":"ContainerStarted","Data":"014f55f3ba5a7a601cd93d133bff09b017678aa0616a643151350bc29660babf"} Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.131431 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-bq2xc" event={"ID":"dbcde851-76fe-4be1-ae50-ed62ebcc75a3","Type":"ContainerStarted","Data":"3979b1145b3f9671e614c4eb6dbcf2527de265c361f78d961edbdbaadb09df16"} Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.131447 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-bq2xc" event={"ID":"dbcde851-76fe-4be1-ae50-ed62ebcc75a3","Type":"ContainerStarted","Data":"923dd610c78a285507ed2ae5b7e17b22b66678cebe2c9193e4d6d1cd79ad9e78"} Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.131536 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.133163 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerStarted","Data":"e57cb363cf129fc8434c69488ab62258b2d1ac37ef2906162cecf3d7968a6d0d"} Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.134416 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" event={"ID":"4686b721-17d6-4951-9107-81ba7c5f1658","Type":"ContainerStarted","Data":"8cc29b1f944f1d0f4230da255dce06acfe108974647a878ec43ea7030a126434"} Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.157508 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/controller-f8648f98b-bq2xc" podStartSLOduration=2.157475856 podStartE2EDuration="2.157475856s" podCreationTimestamp="2025-12-03 11:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:08:50.149417562 +0000 UTC m=+941.179418816" watchObservedRunningTime="2025-12-03 11:08:50.157475856 +0000 UTC m=+941.187477110" Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.596278 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.602479 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dec16985-cd5e-425b-a72d-9a13e835c965-memberlist\") pod \"speaker-7k2pt\" (UID: \"dec16985-cd5e-425b-a72d-9a13e835c965\") " pod="metallb-system/speaker-7k2pt" Dec 03 11:08:50 crc kubenswrapper[4756]: I1203 11:08:50.697294 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7k2pt" Dec 03 11:08:51 crc kubenswrapper[4756]: I1203 11:08:51.147651 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7k2pt" event={"ID":"dec16985-cd5e-425b-a72d-9a13e835c965","Type":"ContainerStarted","Data":"687df1058e3754528ddab4e68036aa9e5199cf866ad3bbf9168d6fc417943a3a"} Dec 03 11:08:51 crc kubenswrapper[4756]: I1203 11:08:51.147716 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7k2pt" event={"ID":"dec16985-cd5e-425b-a72d-9a13e835c965","Type":"ContainerStarted","Data":"e9c5a9622367e36719c4cb6281b7523367a13cde893a626d81962278f041ae8c"} Dec 03 11:08:52 crc kubenswrapper[4756]: I1203 11:08:52.157726 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7k2pt" event={"ID":"dec16985-cd5e-425b-a72d-9a13e835c965","Type":"ContainerStarted","Data":"58cd5afc53f1875119ff66c2ed1e79fdd99880b9ffd17744d1e4a009113652c2"} Dec 03 11:08:52 crc kubenswrapper[4756]: I1203 11:08:52.158533 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7k2pt" Dec 03 11:08:52 crc kubenswrapper[4756]: I1203 11:08:52.184431 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7k2pt" podStartSLOduration=4.184385464 podStartE2EDuration="4.184385464s" podCreationTimestamp="2025-12-03 11:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:08:52.180495776 +0000 UTC m=+943.210497020" watchObservedRunningTime="2025-12-03 11:08:52.184385464 +0000 UTC m=+943.214386708" Dec 03 11:08:52 crc kubenswrapper[4756]: I1203 11:08:52.607058 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:08:52 crc kubenswrapper[4756]: I1203 11:08:52.607132 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:08:59 crc kubenswrapper[4756]: I1203 11:08:59.227541 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-bq2xc" Dec 03 11:09:00 crc kubenswrapper[4756]: I1203 11:09:00.712780 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7k2pt" Dec 03 11:09:03 crc kubenswrapper[4756]: I1203 11:09:03.417329 4756 generic.go:334] "Generic (PLEG): container finished" podID="ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf" containerID="dd2e478fdce13eb8b25f5570c8be75d4d072928a8b2d6b729ab0e751bf5a0e3d" exitCode=0 Dec 03 11:09:03 crc kubenswrapper[4756]: I1203 11:09:03.417695 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerDied","Data":"dd2e478fdce13eb8b25f5570c8be75d4d072928a8b2d6b729ab0e751bf5a0e3d"} Dec 03 11:09:03 crc kubenswrapper[4756]: I1203 11:09:03.426550 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" event={"ID":"4686b721-17d6-4951-9107-81ba7c5f1658","Type":"ContainerStarted","Data":"7ad00c493e3f39c75e1f56fcbe9dc51c3dd8dd6dca59566c461f383c907c845f"} Dec 03 11:09:03 crc kubenswrapper[4756]: I1203 11:09:03.426715 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" Dec 03 11:09:03 crc kubenswrapper[4756]: I1203 11:09:03.469648 4756 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq" podStartSLOduration=2.5207648430000003 podStartE2EDuration="15.469620356s" podCreationTimestamp="2025-12-03 11:08:48 +0000 UTC" firstStartedPulling="2025-12-03 11:08:49.411191194 +0000 UTC m=+940.441192438" lastFinishedPulling="2025-12-03 11:09:02.360046707 +0000 UTC m=+953.390047951" observedRunningTime="2025-12-03 11:09:03.466946595 +0000 UTC m=+954.496947849" watchObservedRunningTime="2025-12-03 11:09:03.469620356 +0000 UTC m=+954.499621600" Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.440854 4756 generic.go:334] "Generic (PLEG): container finished" podID="ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf" containerID="110635628bf32758db20bd1e850e41e42feb9825eb53df6b5fc8103ac4b44b7a" exitCode=0 Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.440984 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerDied","Data":"110635628bf32758db20bd1e850e41e42feb9825eb53df6b5fc8103ac4b44b7a"} Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.586925 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-796sg"] Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.588154 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-796sg" Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.591304 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.593923 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.595085 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mcm9w" Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.602302 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-796sg"] Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.677053 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzn4\" (UniqueName: \"kubernetes.io/projected/8e645de5-3eb5-489f-9786-a61197146117-kube-api-access-pxzn4\") pod \"openstack-operator-index-796sg\" (UID: \"8e645de5-3eb5-489f-9786-a61197146117\") " pod="openstack-operators/openstack-operator-index-796sg" Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.779508 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzn4\" (UniqueName: \"kubernetes.io/projected/8e645de5-3eb5-489f-9786-a61197146117-kube-api-access-pxzn4\") pod \"openstack-operator-index-796sg\" (UID: \"8e645de5-3eb5-489f-9786-a61197146117\") " pod="openstack-operators/openstack-operator-index-796sg" Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.804382 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzn4\" (UniqueName: \"kubernetes.io/projected/8e645de5-3eb5-489f-9786-a61197146117-kube-api-access-pxzn4\") pod \"openstack-operator-index-796sg\" (UID: 
\"8e645de5-3eb5-489f-9786-a61197146117\") " pod="openstack-operators/openstack-operator-index-796sg"
Dec 03 11:09:04 crc kubenswrapper[4756]: I1203 11:09:04.955706 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-796sg"
Dec 03 11:09:05 crc kubenswrapper[4756]: I1203 11:09:05.454189 4756 generic.go:334] "Generic (PLEG): container finished" podID="ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf" containerID="ec3018d4183336248280919741bdc09c07b79bcff36016c60f9f6359245d6856" exitCode=0
Dec 03 11:09:05 crc kubenswrapper[4756]: I1203 11:09:05.454311 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerDied","Data":"ec3018d4183336248280919741bdc09c07b79bcff36016c60f9f6359245d6856"}
Dec 03 11:09:05 crc kubenswrapper[4756]: I1203 11:09:05.485151 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-796sg"]
Dec 03 11:09:05 crc kubenswrapper[4756]: W1203 11:09:05.492858 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e645de5_3eb5_489f_9786_a61197146117.slice/crio-37d387a3caecf719913f37e32ac09c33e82f6deb235976875352f307eaf2da05 WatchSource:0}: Error finding container 37d387a3caecf719913f37e32ac09c33e82f6deb235976875352f307eaf2da05: Status 404 returned error can't find the container with id 37d387a3caecf719913f37e32ac09c33e82f6deb235976875352f307eaf2da05
Dec 03 11:09:06 crc kubenswrapper[4756]: I1203 11:09:06.485999 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-796sg" event={"ID":"8e645de5-3eb5-489f-9786-a61197146117","Type":"ContainerStarted","Data":"37d387a3caecf719913f37e32ac09c33e82f6deb235976875352f307eaf2da05"}
Dec 03 11:09:08 crc kubenswrapper[4756]: I1203 11:09:08.162405 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-796sg"]
Dec 03 11:09:08 crc kubenswrapper[4756]: I1203 11:09:08.768033 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5r2k8"]
Dec 03 11:09:08 crc kubenswrapper[4756]: I1203 11:09:08.769278 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:08 crc kubenswrapper[4756]: I1203 11:09:08.779844 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5r2k8"]
Dec 03 11:09:08 crc kubenswrapper[4756]: I1203 11:09:08.839921 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwnwx\" (UniqueName: \"kubernetes.io/projected/1b7d149f-f0e8-4a81-b7c6-e581a6f12d06-kube-api-access-gwnwx\") pod \"openstack-operator-index-5r2k8\" (UID: \"1b7d149f-f0e8-4a81-b7c6-e581a6f12d06\") " pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:08 crc kubenswrapper[4756]: I1203 11:09:08.941333 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwnwx\" (UniqueName: \"kubernetes.io/projected/1b7d149f-f0e8-4a81-b7c6-e581a6f12d06-kube-api-access-gwnwx\") pod \"openstack-operator-index-5r2k8\" (UID: \"1b7d149f-f0e8-4a81-b7c6-e581a6f12d06\") " pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:08 crc kubenswrapper[4756]: I1203 11:09:08.965752 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwnwx\" (UniqueName: \"kubernetes.io/projected/1b7d149f-f0e8-4a81-b7c6-e581a6f12d06-kube-api-access-gwnwx\") pod \"openstack-operator-index-5r2k8\" (UID: \"1b7d149f-f0e8-4a81-b7c6-e581a6f12d06\") " pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.152571 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.508330 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-796sg" event={"ID":"8e645de5-3eb5-489f-9786-a61197146117","Type":"ContainerStarted","Data":"627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7"}
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.510077 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-796sg" podUID="8e645de5-3eb5-489f-9786-a61197146117" containerName="registry-server" containerID="cri-o://627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7" gracePeriod=2
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.521896 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerStarted","Data":"d419c228abba003630404e9b151fec4c0af8bb0965d2e9347c4ff7ef8b04cd1f"}
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.521978 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerStarted","Data":"a60e1b4ee89081532d374f825ecb3ffba91b1ab98d257dfd659348d1cbee69df"}
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.521994 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerStarted","Data":"3785737e360cb390d07c5de6cff0450d50138f76cde921c1b964007a05ff27b6"}
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.537611 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-796sg" podStartSLOduration=2.467485156 podStartE2EDuration="5.537588393s" podCreationTimestamp="2025-12-03 11:09:04 +0000 UTC" firstStartedPulling="2025-12-03 11:09:05.495657397 +0000 UTC m=+956.525658641" lastFinishedPulling="2025-12-03 11:09:08.565760634 +0000 UTC m=+959.595761878" observedRunningTime="2025-12-03 11:09:09.527858869 +0000 UTC m=+960.557860133" watchObservedRunningTime="2025-12-03 11:09:09.537588393 +0000 UTC m=+960.567589637"
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.572320 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5r2k8"]
Dec 03 11:09:09 crc kubenswrapper[4756]: W1203 11:09:09.579142 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b7d149f_f0e8_4a81_b7c6_e581a6f12d06.slice/crio-fc085abc24cb7974152acb511a561b0e894fd166bbdae740d9e5f5b402c88a66 WatchSource:0}: Error finding container fc085abc24cb7974152acb511a561b0e894fd166bbdae740d9e5f5b402c88a66: Status 404 returned error can't find the container with id fc085abc24cb7974152acb511a561b0e894fd166bbdae740d9e5f5b402c88a66
Dec 03 11:09:09 crc kubenswrapper[4756]: I1203 11:09:09.979300 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-796sg"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.026148 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxzn4\" (UniqueName: \"kubernetes.io/projected/8e645de5-3eb5-489f-9786-a61197146117-kube-api-access-pxzn4\") pod \"8e645de5-3eb5-489f-9786-a61197146117\" (UID: \"8e645de5-3eb5-489f-9786-a61197146117\") "
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.032813 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e645de5-3eb5-489f-9786-a61197146117-kube-api-access-pxzn4" (OuterVolumeSpecName: "kube-api-access-pxzn4") pod "8e645de5-3eb5-489f-9786-a61197146117" (UID: "8e645de5-3eb5-489f-9786-a61197146117"). InnerVolumeSpecName "kube-api-access-pxzn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.128120 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxzn4\" (UniqueName: \"kubernetes.io/projected/8e645de5-3eb5-489f-9786-a61197146117-kube-api-access-pxzn4\") on node \"crc\" DevicePath \"\""
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.531873 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5r2k8" event={"ID":"1b7d149f-f0e8-4a81-b7c6-e581a6f12d06","Type":"ContainerStarted","Data":"1b12001d04db2bfc726fab14476a8b2e536caf623a2b1660980d205e1d40468e"}
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.531945 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5r2k8" event={"ID":"1b7d149f-f0e8-4a81-b7c6-e581a6f12d06","Type":"ContainerStarted","Data":"fc085abc24cb7974152acb511a561b0e894fd166bbdae740d9e5f5b402c88a66"}
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.537557 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerStarted","Data":"8346fed845411f5ec23750ec4d5fd727ad77d4291012a618d400b66d861427ad"}
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.538025 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerStarted","Data":"687a46b1c8b27cdacf37063843367284fb2c78717d9c0a54013f5ae0f3e7d55a"}
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.538069 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7n2sf"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.538098 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7n2sf" event={"ID":"ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf","Type":"ContainerStarted","Data":"56473f15aea4f5573d4fd19af94edb3434bc309bf035feb2784b4cc2538adb4c"}
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.539198 4756 generic.go:334] "Generic (PLEG): container finished" podID="8e645de5-3eb5-489f-9786-a61197146117" containerID="627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7" exitCode=0
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.539250 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-796sg"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.539267 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-796sg" event={"ID":"8e645de5-3eb5-489f-9786-a61197146117","Type":"ContainerDied","Data":"627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7"}
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.539339 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-796sg" event={"ID":"8e645de5-3eb5-489f-9786-a61197146117","Type":"ContainerDied","Data":"37d387a3caecf719913f37e32ac09c33e82f6deb235976875352f307eaf2da05"}
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.539387 4756 scope.go:117] "RemoveContainer" containerID="627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.560398 4756 scope.go:117] "RemoveContainer" containerID="627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7"
Dec 03 11:09:10 crc kubenswrapper[4756]: E1203 11:09:10.561892 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7\": container with ID starting with 627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7 not found: ID does not exist" containerID="627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.561976 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7"} err="failed to get container status \"627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7\": rpc error: code = NotFound desc = could not find container \"627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7\": container with ID starting with 627daf00a169722d94c9aedf00d56517ace611aae4b8585a621582399146a0e7 not found: ID does not exist"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.563551 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5r2k8" podStartSLOduration=2.425548183 podStartE2EDuration="2.563508999s" podCreationTimestamp="2025-12-03 11:09:08 +0000 UTC" firstStartedPulling="2025-12-03 11:09:09.585165263 +0000 UTC m=+960.615166507" lastFinishedPulling="2025-12-03 11:09:09.723126079 +0000 UTC m=+960.753127323" observedRunningTime="2025-12-03 11:09:10.555580889 +0000 UTC m=+961.585582143" watchObservedRunningTime="2025-12-03 11:09:10.563508999 +0000 UTC m=+961.593510283"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.594508 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7n2sf" podStartSLOduration=10.030156945 podStartE2EDuration="22.594479877s" podCreationTimestamp="2025-12-03 11:08:48 +0000 UTC" firstStartedPulling="2025-12-03 11:08:49.81868437 +0000 UTC m=+940.848685614" lastFinishedPulling="2025-12-03 11:09:02.383007302 +0000 UTC m=+953.413008546" observedRunningTime="2025-12-03 11:09:10.589236768 +0000 UTC m=+961.619238032" watchObservedRunningTime="2025-12-03 11:09:10.594479877 +0000 UTC m=+961.624481121"
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.615302 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-796sg"]
Dec 03 11:09:10 crc kubenswrapper[4756]: I1203 11:09:10.619575 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-796sg"]
Dec 03 11:09:11 crc kubenswrapper[4756]: I1203 11:09:11.242304 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e645de5-3eb5-489f-9786-a61197146117" path="/var/lib/kubelet/pods/8e645de5-3eb5-489f-9786-a61197146117/volumes"
Dec 03 11:09:14 crc kubenswrapper[4756]: I1203 11:09:14.689710 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7n2sf"
Dec 03 11:09:14 crc kubenswrapper[4756]: I1203 11:09:14.726818 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7n2sf"
Dec 03 11:09:19 crc kubenswrapper[4756]: I1203 11:09:19.108495 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-v6jcq"
Dec 03 11:09:19 crc kubenswrapper[4756]: I1203 11:09:19.153617 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:19 crc kubenswrapper[4756]: I1203 11:09:19.154091 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:19 crc kubenswrapper[4756]: I1203 11:09:19.181303 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:19 crc kubenswrapper[4756]: I1203 11:09:19.626510 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5r2k8"
Dec 03 11:09:19 crc kubenswrapper[4756]: I1203 11:09:19.695469 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7n2sf"
Dec 03 11:09:22 crc kubenswrapper[4756]: I1203 11:09:22.607740 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 11:09:22 crc kubenswrapper[4756]: I1203 11:09:22.607848 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.826148 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"]
Dec 03 11:09:24 crc kubenswrapper[4756]: E1203 11:09:24.826796 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e645de5-3eb5-489f-9786-a61197146117" containerName="registry-server"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.826812 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e645de5-3eb5-489f-9786-a61197146117" containerName="registry-server"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.826962 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e645de5-3eb5-489f-9786-a61197146117" containerName="registry-server"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.827913 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.831413 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mhlds"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.841264 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"]
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.879713 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-util\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.879836 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-bundle\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.879981 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfq7\" (UniqueName: \"kubernetes.io/projected/3a99ddd7-a877-4ce4-b97c-65350ab2af24-kube-api-access-xhfq7\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.981442 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-util\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.981515 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-bundle\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.981590 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfq7\" (UniqueName: \"kubernetes.io/projected/3a99ddd7-a877-4ce4-b97c-65350ab2af24-kube-api-access-xhfq7\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.982519 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-util\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:24 crc kubenswrapper[4756]: I1203 11:09:24.983005 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-bundle\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:25 crc kubenswrapper[4756]: I1203 11:09:25.002571 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfq7\" (UniqueName: \"kubernetes.io/projected/3a99ddd7-a877-4ce4-b97c-65350ab2af24-kube-api-access-xhfq7\") pod \"93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") " pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:25 crc kubenswrapper[4756]: I1203 11:09:25.148032 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:25 crc kubenswrapper[4756]: I1203 11:09:25.768447 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"]
Dec 03 11:09:25 crc kubenswrapper[4756]: W1203 11:09:25.775887 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a99ddd7_a877_4ce4_b97c_65350ab2af24.slice/crio-5ae3ad44689f3f5749db4cfb52e6ae02ac9237611ddc638f143fb76e2b6242e9 WatchSource:0}: Error finding container 5ae3ad44689f3f5749db4cfb52e6ae02ac9237611ddc638f143fb76e2b6242e9: Status 404 returned error can't find the container with id 5ae3ad44689f3f5749db4cfb52e6ae02ac9237611ddc638f143fb76e2b6242e9
Dec 03 11:09:26 crc kubenswrapper[4756]: I1203 11:09:26.710326 4756 generic.go:334] "Generic (PLEG): container finished" podID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerID="89005f7ee09a63221e97f36f2a499b4523e21cd9e306206bab50340df171b70a" exitCode=0
Dec 03 11:09:26 crc kubenswrapper[4756]: I1203 11:09:26.710430 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9" event={"ID":"3a99ddd7-a877-4ce4-b97c-65350ab2af24","Type":"ContainerDied","Data":"89005f7ee09a63221e97f36f2a499b4523e21cd9e306206bab50340df171b70a"}
Dec 03 11:09:26 crc kubenswrapper[4756]: I1203 11:09:26.710658 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9" event={"ID":"3a99ddd7-a877-4ce4-b97c-65350ab2af24","Type":"ContainerStarted","Data":"5ae3ad44689f3f5749db4cfb52e6ae02ac9237611ddc638f143fb76e2b6242e9"}
Dec 03 11:09:27 crc kubenswrapper[4756]: I1203 11:09:27.721573 4756 generic.go:334] "Generic (PLEG): container finished" podID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerID="c62aa714f3d2637920df521967b2d2517f0752a37d70812c3f0d1ffeb1fa04e7" exitCode=0
Dec 03 11:09:27 crc kubenswrapper[4756]: I1203 11:09:27.721635 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9" event={"ID":"3a99ddd7-a877-4ce4-b97c-65350ab2af24","Type":"ContainerDied","Data":"c62aa714f3d2637920df521967b2d2517f0752a37d70812c3f0d1ffeb1fa04e7"}
Dec 03 11:09:28 crc kubenswrapper[4756]: I1203 11:09:28.731177 4756 generic.go:334] "Generic (PLEG): container finished" podID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerID="552090fbeeae0bdc3a37479fcf5dbe498f6e0aa80e55ae390b08ed6e47d1235b" exitCode=0
Dec 03 11:09:28 crc kubenswrapper[4756]: I1203 11:09:28.731303 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9" event={"ID":"3a99ddd7-a877-4ce4-b97c-65350ab2af24","Type":"ContainerDied","Data":"552090fbeeae0bdc3a37479fcf5dbe498f6e0aa80e55ae390b08ed6e47d1235b"}
Dec 03 11:09:29 crc kubenswrapper[4756]: I1203 11:09:29.983747 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.061735 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-bundle\") pod \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") "
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.061940 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-util\") pod \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") "
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.062083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfq7\" (UniqueName: \"kubernetes.io/projected/3a99ddd7-a877-4ce4-b97c-65350ab2af24-kube-api-access-xhfq7\") pod \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\" (UID: \"3a99ddd7-a877-4ce4-b97c-65350ab2af24\") "
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.062813 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-bundle" (OuterVolumeSpecName: "bundle") pod "3a99ddd7-a877-4ce4-b97c-65350ab2af24" (UID: "3a99ddd7-a877-4ce4-b97c-65350ab2af24"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.068770 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a99ddd7-a877-4ce4-b97c-65350ab2af24-kube-api-access-xhfq7" (OuterVolumeSpecName: "kube-api-access-xhfq7") pod "3a99ddd7-a877-4ce4-b97c-65350ab2af24" (UID: "3a99ddd7-a877-4ce4-b97c-65350ab2af24"). InnerVolumeSpecName "kube-api-access-xhfq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.077222 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-util" (OuterVolumeSpecName: "util") pod "3a99ddd7-a877-4ce4-b97c-65350ab2af24" (UID: "3a99ddd7-a877-4ce4-b97c-65350ab2af24"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.163599 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhfq7\" (UniqueName: \"kubernetes.io/projected/3a99ddd7-a877-4ce4-b97c-65350ab2af24-kube-api-access-xhfq7\") on node \"crc\" DevicePath \"\""
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.163641 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.163653 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a99ddd7-a877-4ce4-b97c-65350ab2af24-util\") on node \"crc\" DevicePath \"\""
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.749259 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9" event={"ID":"3a99ddd7-a877-4ce4-b97c-65350ab2af24","Type":"ContainerDied","Data":"5ae3ad44689f3f5749db4cfb52e6ae02ac9237611ddc638f143fb76e2b6242e9"}
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.749349 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ae3ad44689f3f5749db4cfb52e6ae02ac9237611ddc638f143fb76e2b6242e9"
Dec 03 11:09:30 crc kubenswrapper[4756]: I1203 11:09:30.749370 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.006629 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"]
Dec 03 11:09:38 crc kubenswrapper[4756]: E1203 11:09:38.010945 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerName="pull"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.010991 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerName="pull"
Dec 03 11:09:38 crc kubenswrapper[4756]: E1203 11:09:38.011012 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerName="extract"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.011021 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerName="extract"
Dec 03 11:09:38 crc kubenswrapper[4756]: E1203 11:09:38.011035 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerName="util"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.011042 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerName="util"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.011212 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a99ddd7-a877-4ce4-b97c-65350ab2af24" containerName="extract"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.011836 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.014595 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-m4s8w"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.043697 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"]
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.077709 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhz4p\" (UniqueName: \"kubernetes.io/projected/3ae84629-3d85-49fb-a3d9-93c766c1be75-kube-api-access-rhz4p\") pod \"openstack-operator-controller-operator-779dc79ddf-bn5rj\" (UID: \"3ae84629-3d85-49fb-a3d9-93c766c1be75\") " pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.178918 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhz4p\" (UniqueName: \"kubernetes.io/projected/3ae84629-3d85-49fb-a3d9-93c766c1be75-kube-api-access-rhz4p\") pod \"openstack-operator-controller-operator-779dc79ddf-bn5rj\" (UID: \"3ae84629-3d85-49fb-a3d9-93c766c1be75\") " pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.211335 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhz4p\" (UniqueName: \"kubernetes.io/projected/3ae84629-3d85-49fb-a3d9-93c766c1be75-kube-api-access-rhz4p\") pod \"openstack-operator-controller-operator-779dc79ddf-bn5rj\" (UID: \"3ae84629-3d85-49fb-a3d9-93c766c1be75\") " pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.333606 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.814321 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj"]
Dec 03 11:09:38 crc kubenswrapper[4756]: I1203 11:09:38.968806 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj" event={"ID":"3ae84629-3d85-49fb-a3d9-93c766c1be75","Type":"ContainerStarted","Data":"5ffd7d9520de2aaa40034ffda921894e5a378fe9296e787e062ecd3395cc6365"}
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.261598 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9sskn"]
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.265496 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sskn"]
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.265713 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.485822 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqq8n\" (UniqueName: \"kubernetes.io/projected/7c727842-0551-4545-a4e6-91db0d4e2f5e-kube-api-access-fqq8n\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.486230 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-utilities\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.486318 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-catalog-content\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.587384 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqq8n\" (UniqueName: \"kubernetes.io/projected/7c727842-0551-4545-a4e6-91db0d4e2f5e-kube-api-access-fqq8n\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.587482 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-utilities\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.587531 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-catalog-content\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.588190 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-catalog-content\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.588562 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-utilities\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.629675 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqq8n\" (UniqueName: \"kubernetes.io/projected/7c727842-0551-4545-a4e6-91db0d4e2f5e-kube-api-access-fqq8n\") pod \"redhat-marketplace-9sskn\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " pod="openshift-marketplace/redhat-marketplace-9sskn"
Dec 03 11:09:41 crc kubenswrapper[4756]: I1203 11:09:41.689719 4756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sskn" Dec 03 11:09:46 crc kubenswrapper[4756]: I1203 11:09:46.171142 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sskn"] Dec 03 11:09:47 crc kubenswrapper[4756]: I1203 11:09:47.060071 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj" event={"ID":"3ae84629-3d85-49fb-a3d9-93c766c1be75","Type":"ContainerStarted","Data":"f61f021f3d6a95bb40ac51c36be9c865dbaeec320126f706e72a7150cd818225"} Dec 03 11:09:47 crc kubenswrapper[4756]: I1203 11:09:47.060635 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj" Dec 03 11:09:47 crc kubenswrapper[4756]: I1203 11:09:47.062677 4756 generic.go:334] "Generic (PLEG): container finished" podID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerID="209a66dbbb07535ffac7c465ae403e53eba6da93e82c29bea516274d15f8d462" exitCode=0 Dec 03 11:09:47 crc kubenswrapper[4756]: I1203 11:09:47.062717 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sskn" event={"ID":"7c727842-0551-4545-a4e6-91db0d4e2f5e","Type":"ContainerDied","Data":"209a66dbbb07535ffac7c465ae403e53eba6da93e82c29bea516274d15f8d462"} Dec 03 11:09:47 crc kubenswrapper[4756]: I1203 11:09:47.062740 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sskn" event={"ID":"7c727842-0551-4545-a4e6-91db0d4e2f5e","Type":"ContainerStarted","Data":"cd7558470261e94e81b3eb7360b82c4dea35d1106b40e231128384a553ebf357"} Dec 03 11:09:47 crc kubenswrapper[4756]: I1203 11:09:47.123728 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj" podStartSLOduration=2.7168433050000003 
podStartE2EDuration="10.123703097s" podCreationTimestamp="2025-12-03 11:09:37 +0000 UTC" firstStartedPulling="2025-12-03 11:09:38.821731654 +0000 UTC m=+989.851732898" lastFinishedPulling="2025-12-03 11:09:46.228591446 +0000 UTC m=+997.258592690" observedRunningTime="2025-12-03 11:09:47.099992316 +0000 UTC m=+998.129993570" watchObservedRunningTime="2025-12-03 11:09:47.123703097 +0000 UTC m=+998.153704341" Dec 03 11:09:48 crc kubenswrapper[4756]: I1203 11:09:48.089991 4756 generic.go:334] "Generic (PLEG): container finished" podID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerID="7d3a51f25b19ce96d39419516c3ae4445377e0ed488546e7d60dd70fb91771a4" exitCode=0 Dec 03 11:09:48 crc kubenswrapper[4756]: I1203 11:09:48.091081 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sskn" event={"ID":"7c727842-0551-4545-a4e6-91db0d4e2f5e","Type":"ContainerDied","Data":"7d3a51f25b19ce96d39419516c3ae4445377e0ed488546e7d60dd70fb91771a4"} Dec 03 11:09:49 crc kubenswrapper[4756]: I1203 11:09:49.101934 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sskn" event={"ID":"7c727842-0551-4545-a4e6-91db0d4e2f5e","Type":"ContainerStarted","Data":"fb3ef66ae961a0458b7d238e61beb3712cf52b1bff54e6b1019c31a81da46c1c"} Dec 03 11:09:49 crc kubenswrapper[4756]: I1203 11:09:49.130287 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9sskn" podStartSLOduration=6.344467598 podStartE2EDuration="8.13025529s" podCreationTimestamp="2025-12-03 11:09:41 +0000 UTC" firstStartedPulling="2025-12-03 11:09:47.06485693 +0000 UTC m=+998.094858174" lastFinishedPulling="2025-12-03 11:09:48.850644622 +0000 UTC m=+999.880645866" observedRunningTime="2025-12-03 11:09:49.124017755 +0000 UTC m=+1000.154018999" watchObservedRunningTime="2025-12-03 11:09:49.13025529 +0000 UTC m=+1000.160256534" Dec 03 11:09:51 crc kubenswrapper[4756]: I1203 
11:09:51.689989 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9sskn" Dec 03 11:09:51 crc kubenswrapper[4756]: I1203 11:09:51.690406 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9sskn" Dec 03 11:09:51 crc kubenswrapper[4756]: I1203 11:09:51.739605 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9sskn" Dec 03 11:09:52 crc kubenswrapper[4756]: I1203 11:09:52.607353 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:09:52 crc kubenswrapper[4756]: I1203 11:09:52.607941 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:09:52 crc kubenswrapper[4756]: I1203 11:09:52.608023 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:09:52 crc kubenswrapper[4756]: I1203 11:09:52.608823 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09868558298ce0e0fe5ddedbb9f992422ad279637132aad7bc1ac485611cc892"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:09:52 crc kubenswrapper[4756]: I1203 11:09:52.608889 4756 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://09868558298ce0e0fe5ddedbb9f992422ad279637132aad7bc1ac485611cc892" gracePeriod=600 Dec 03 11:09:53 crc kubenswrapper[4756]: I1203 11:09:53.133536 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="09868558298ce0e0fe5ddedbb9f992422ad279637132aad7bc1ac485611cc892" exitCode=0 Dec 03 11:09:53 crc kubenswrapper[4756]: I1203 11:09:53.137050 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"09868558298ce0e0fe5ddedbb9f992422ad279637132aad7bc1ac485611cc892"} Dec 03 11:09:53 crc kubenswrapper[4756]: I1203 11:09:53.137255 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"664265e67f7670380e0f58f5627ec2e4920ce372619dd56f4c84d4b06cd1734c"} Dec 03 11:09:53 crc kubenswrapper[4756]: I1203 11:09:53.137300 4756 scope.go:117] "RemoveContainer" containerID="01dfa0931fd4257f5b01935057514c563c7b4e22621a95eaed462238153a1e0f" Dec 03 11:09:58 crc kubenswrapper[4756]: I1203 11:09:58.337231 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-779dc79ddf-bn5rj" Dec 03 11:10:01 crc kubenswrapper[4756]: I1203 11:10:01.961938 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9sskn" Dec 03 11:10:04 crc kubenswrapper[4756]: I1203 11:10:04.822266 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9sskn"] Dec 03 11:10:04 crc kubenswrapper[4756]: I1203 11:10:04.823130 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9sskn" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="registry-server" containerID="cri-o://fb3ef66ae961a0458b7d238e61beb3712cf52b1bff54e6b1019c31a81da46c1c" gracePeriod=2 Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.221793 4756 generic.go:334] "Generic (PLEG): container finished" podID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerID="fb3ef66ae961a0458b7d238e61beb3712cf52b1bff54e6b1019c31a81da46c1c" exitCode=0 Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.221884 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sskn" event={"ID":"7c727842-0551-4545-a4e6-91db0d4e2f5e","Type":"ContainerDied","Data":"fb3ef66ae961a0458b7d238e61beb3712cf52b1bff54e6b1019c31a81da46c1c"} Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.770808 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sskn" Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.820596 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-utilities\") pod \"7c727842-0551-4545-a4e6-91db0d4e2f5e\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.820641 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-catalog-content\") pod \"7c727842-0551-4545-a4e6-91db0d4e2f5e\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.820750 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqq8n\" (UniqueName: \"kubernetes.io/projected/7c727842-0551-4545-a4e6-91db0d4e2f5e-kube-api-access-fqq8n\") pod \"7c727842-0551-4545-a4e6-91db0d4e2f5e\" (UID: \"7c727842-0551-4545-a4e6-91db0d4e2f5e\") " Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.822007 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-utilities" (OuterVolumeSpecName: "utilities") pod "7c727842-0551-4545-a4e6-91db0d4e2f5e" (UID: "7c727842-0551-4545-a4e6-91db0d4e2f5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.839364 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c727842-0551-4545-a4e6-91db0d4e2f5e-kube-api-access-fqq8n" (OuterVolumeSpecName: "kube-api-access-fqq8n") pod "7c727842-0551-4545-a4e6-91db0d4e2f5e" (UID: "7c727842-0551-4545-a4e6-91db0d4e2f5e"). InnerVolumeSpecName "kube-api-access-fqq8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.845516 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c727842-0551-4545-a4e6-91db0d4e2f5e" (UID: "7c727842-0551-4545-a4e6-91db0d4e2f5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.922235 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.922267 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c727842-0551-4545-a4e6-91db0d4e2f5e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:10:05 crc kubenswrapper[4756]: I1203 11:10:05.922280 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqq8n\" (UniqueName: \"kubernetes.io/projected/7c727842-0551-4545-a4e6-91db0d4e2f5e-kube-api-access-fqq8n\") on node \"crc\" DevicePath \"\"" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.232509 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9sskn" event={"ID":"7c727842-0551-4545-a4e6-91db0d4e2f5e","Type":"ContainerDied","Data":"cd7558470261e94e81b3eb7360b82c4dea35d1106b40e231128384a553ebf357"} Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.232570 4756 scope.go:117] "RemoveContainer" containerID="fb3ef66ae961a0458b7d238e61beb3712cf52b1bff54e6b1019c31a81da46c1c" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.233795 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9sskn" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.246881 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wrrz6"] Dec 03 11:10:06 crc kubenswrapper[4756]: E1203 11:10:06.247275 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="extract-utilities" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.247300 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="extract-utilities" Dec 03 11:10:06 crc kubenswrapper[4756]: E1203 11:10:06.247320 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="extract-content" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.247331 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="extract-content" Dec 03 11:10:06 crc kubenswrapper[4756]: E1203 11:10:06.247361 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="registry-server" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.247370 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="registry-server" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.247530 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" containerName="registry-server" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.248692 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.253231 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrrz6"] Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.274537 4756 scope.go:117] "RemoveContainer" containerID="7d3a51f25b19ce96d39419516c3ae4445377e0ed488546e7d60dd70fb91771a4" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.298368 4756 scope.go:117] "RemoveContainer" containerID="209a66dbbb07535ffac7c465ae403e53eba6da93e82c29bea516274d15f8d462" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.308168 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sskn"] Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.314651 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9sskn"] Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.429301 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-utilities\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.429771 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j88n7\" (UniqueName: \"kubernetes.io/projected/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-kube-api-access-j88n7\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.429980 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-catalog-content\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.531773 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-catalog-content\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.531879 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-utilities\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.531912 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j88n7\" (UniqueName: \"kubernetes.io/projected/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-kube-api-access-j88n7\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.532611 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-catalog-content\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.532640 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-utilities\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.550218 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j88n7\" (UniqueName: \"kubernetes.io/projected/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-kube-api-access-j88n7\") pod \"certified-operators-wrrz6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:06 crc kubenswrapper[4756]: I1203 11:10:06.576751 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:07 crc kubenswrapper[4756]: I1203 11:10:07.008932 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrrz6"] Dec 03 11:10:07 crc kubenswrapper[4756]: W1203 11:10:07.017705 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3517ce5b_ffee_43c6_9f19_ffb86c6b8ac6.slice/crio-e9c655669ead6b4da37d15f5563e8d1f01d866336b0f87a5679e1143dbc4b2c5 WatchSource:0}: Error finding container e9c655669ead6b4da37d15f5563e8d1f01d866336b0f87a5679e1143dbc4b2c5: Status 404 returned error can't find the container with id e9c655669ead6b4da37d15f5563e8d1f01d866336b0f87a5679e1143dbc4b2c5 Dec 03 11:10:07 crc kubenswrapper[4756]: I1203 11:10:07.240859 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c727842-0551-4545-a4e6-91db0d4e2f5e" path="/var/lib/kubelet/pods/7c727842-0551-4545-a4e6-91db0d4e2f5e/volumes" Dec 03 11:10:07 crc kubenswrapper[4756]: I1203 11:10:07.242811 4756 generic.go:334] "Generic (PLEG): container finished" podID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" 
containerID="2870012ffb69260337f9ca1c209e7472e486e06eabe74b61ed878661d8dce8ab" exitCode=0 Dec 03 11:10:07 crc kubenswrapper[4756]: I1203 11:10:07.242841 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrz6" event={"ID":"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6","Type":"ContainerDied","Data":"2870012ffb69260337f9ca1c209e7472e486e06eabe74b61ed878661d8dce8ab"} Dec 03 11:10:07 crc kubenswrapper[4756]: I1203 11:10:07.242865 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrz6" event={"ID":"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6","Type":"ContainerStarted","Data":"e9c655669ead6b4da37d15f5563e8d1f01d866336b0f87a5679e1143dbc4b2c5"} Dec 03 11:10:13 crc kubenswrapper[4756]: I1203 11:10:13.310166 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrz6" event={"ID":"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6","Type":"ContainerStarted","Data":"3b8919e283b78f96efc4da53c1fa718302640c12746e698f2318564e47685a4b"} Dec 03 11:10:14 crc kubenswrapper[4756]: I1203 11:10:14.321630 4756 generic.go:334] "Generic (PLEG): container finished" podID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerID="3b8919e283b78f96efc4da53c1fa718302640c12746e698f2318564e47685a4b" exitCode=0 Dec 03 11:10:14 crc kubenswrapper[4756]: I1203 11:10:14.321740 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrz6" event={"ID":"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6","Type":"ContainerDied","Data":"3b8919e283b78f96efc4da53c1fa718302640c12746e698f2318564e47685a4b"} Dec 03 11:10:15 crc kubenswrapper[4756]: I1203 11:10:15.382476 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrz6" event={"ID":"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6","Type":"ContainerStarted","Data":"d646316d762167926c928bdd22c76beaf7bc2f2a99d1b83838aaa99d3e5e1cbf"} Dec 03 11:10:15 crc 
kubenswrapper[4756]: I1203 11:10:15.496063 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wrrz6" podStartSLOduration=1.887977683 podStartE2EDuration="9.496003375s" podCreationTimestamp="2025-12-03 11:10:06 +0000 UTC" firstStartedPulling="2025-12-03 11:10:07.244487087 +0000 UTC m=+1018.274488331" lastFinishedPulling="2025-12-03 11:10:14.852512779 +0000 UTC m=+1025.882514023" observedRunningTime="2025-12-03 11:10:15.488759969 +0000 UTC m=+1026.518761233" watchObservedRunningTime="2025-12-03 11:10:15.496003375 +0000 UTC m=+1026.526004629" Dec 03 11:10:16 crc kubenswrapper[4756]: I1203 11:10:16.577654 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:16 crc kubenswrapper[4756]: I1203 11:10:16.577727 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:17 crc kubenswrapper[4756]: I1203 11:10:17.634630 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wrrz6" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="registry-server" probeResult="failure" output=< Dec 03 11:10:17 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 11:10:17 crc kubenswrapper[4756]: > Dec 03 11:10:26 crc kubenswrapper[4756]: I1203 11:10:26.621336 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:26 crc kubenswrapper[4756]: I1203 11:10:26.682220 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:10:26 crc kubenswrapper[4756]: I1203 11:10:26.765449 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrrz6"] Dec 03 11:10:26 crc 
kubenswrapper[4756]: I1203 11:10:26.857916 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqt2k"] Dec 03 11:10:26 crc kubenswrapper[4756]: I1203 11:10:26.858259 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fqt2k" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="registry-server" containerID="cri-o://2f6416d9132cfb27d34443f65b2b1ac44336611b1bb84e2efc4dda897b9cebe5" gracePeriod=2 Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.459359 4756 generic.go:334] "Generic (PLEG): container finished" podID="24f50f1e-1793-40ca-b105-31910761c4ed" containerID="2f6416d9132cfb27d34443f65b2b1ac44336611b1bb84e2efc4dda897b9cebe5" exitCode=0 Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.459611 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqt2k" event={"ID":"24f50f1e-1793-40ca-b105-31910761c4ed","Type":"ContainerDied","Data":"2f6416d9132cfb27d34443f65b2b1ac44336611b1bb84e2efc4dda897b9cebe5"} Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.778134 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.779732 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.784252 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-hbmvw" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.788454 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.790363 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.794844 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xz96f" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.816087 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.879052 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.879930 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwcdp\" (UniqueName: \"kubernetes.io/projected/02ff397c-db50-4b3b-be2c-b43dcd1c71db-kube-api-access-bwcdp\") pod \"barbican-operator-controller-manager-7d9dfd778-cg2g8\" (UID: \"02ff397c-db50-4b3b-be2c-b43dcd1c71db\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.879972 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2hpq\" 
(UniqueName: \"kubernetes.io/projected/22eded41-fc81-4a9c-b831-8cfb8d339258-kube-api-access-t2hpq\") pod \"cinder-operator-controller-manager-859b6ccc6-vqlmr\" (UID: \"22eded41-fc81-4a9c-b831-8cfb8d339258\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.908157 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-62955"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.909628 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.916386 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-k5xbf" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.924830 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.928260 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.932722 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-thsrf" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.969334 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-62955"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.977513 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q"] Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.979478 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.981175 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwcdp\" (UniqueName: \"kubernetes.io/projected/02ff397c-db50-4b3b-be2c-b43dcd1c71db-kube-api-access-bwcdp\") pod \"barbican-operator-controller-manager-7d9dfd778-cg2g8\" (UID: \"02ff397c-db50-4b3b-be2c-b43dcd1c71db\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.981335 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2hpq\" (UniqueName: \"kubernetes.io/projected/22eded41-fc81-4a9c-b831-8cfb8d339258-kube-api-access-t2hpq\") pod \"cinder-operator-controller-manager-859b6ccc6-vqlmr\" (UID: \"22eded41-fc81-4a9c-b831-8cfb8d339258\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" Dec 03 11:10:27 crc kubenswrapper[4756]: I1203 11:10:27.991023 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-8r5ps" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.003859 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.025511 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.026940 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.030544 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwcdp\" (UniqueName: \"kubernetes.io/projected/02ff397c-db50-4b3b-be2c-b43dcd1c71db-kube-api-access-bwcdp\") pod \"barbican-operator-controller-manager-7d9dfd778-cg2g8\" (UID: \"02ff397c-db50-4b3b-be2c-b43dcd1c71db\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.030636 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jwd5w" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.033884 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2hpq\" (UniqueName: \"kubernetes.io/projected/22eded41-fc81-4a9c-b831-8cfb8d339258-kube-api-access-t2hpq\") pod \"cinder-operator-controller-manager-859b6ccc6-vqlmr\" (UID: \"22eded41-fc81-4a9c-b831-8cfb8d339258\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.039260 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p"] Dec 03 
11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.041055 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.044034 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.051059 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s554b" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.057284 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.071331 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.076121 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.077657 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.082833 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lf8\" (UniqueName: \"kubernetes.io/projected/d605c327-c0d8-4466-b135-a1c8c777b91c-kube-api-access-24lf8\") pod \"horizon-operator-controller-manager-68c6d99b8f-crd8q\" (UID: \"d605c327-c0d8-4466-b135-a1c8c777b91c\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.082899 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnl6w\" (UniqueName: \"kubernetes.io/projected/77626dde-3586-4b33-b4a4-326bab5bfe19-kube-api-access-xnl6w\") pod \"glance-operator-controller-manager-77987cd8cd-q5klh\" (UID: \"77626dde-3586-4b33-b4a4-326bab5bfe19\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.082978 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbxtm\" (UniqueName: \"kubernetes.io/projected/f651f26f-55f4-47cb-a318-0e2a9512f194-kube-api-access-lbxtm\") pod \"designate-operator-controller-manager-78b4bc895b-62955\" (UID: \"f651f26f-55f4-47cb-a318-0e2a9512f194\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.085656 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.086891 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.092541 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-rrbzh" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.092795 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2bkdm" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.123760 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.125251 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.150460 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.166997 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.172623 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186286 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbxtm\" (UniqueName: \"kubernetes.io/projected/f651f26f-55f4-47cb-a318-0e2a9512f194-kube-api-access-lbxtm\") pod \"designate-operator-controller-manager-78b4bc895b-62955\" (UID: \"f651f26f-55f4-47cb-a318-0e2a9512f194\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186357 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186397 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wddpc\" (UniqueName: \"kubernetes.io/projected/0f3ba133-8575-4603-ad87-77502244b892-kube-api-access-wddpc\") pod \"heat-operator-controller-manager-5f64f6f8bb-gqmt7\" (UID: \"0f3ba133-8575-4603-ad87-77502244b892\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186445 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqcz4\" (UniqueName: \"kubernetes.io/projected/786fcded-8103-4691-b8a4-fa6ef5b79ee6-kube-api-access-kqcz4\") pod 
\"ironic-operator-controller-manager-6c548fd776-hc7n7\" (UID: \"786fcded-8103-4691-b8a4-fa6ef5b79ee6\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186482 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24lf8\" (UniqueName: \"kubernetes.io/projected/d605c327-c0d8-4466-b135-a1c8c777b91c-kube-api-access-24lf8\") pod \"horizon-operator-controller-manager-68c6d99b8f-crd8q\" (UID: \"d605c327-c0d8-4466-b135-a1c8c777b91c\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186514 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gtqq\" (UniqueName: \"kubernetes.io/projected/0df6dfa3-00de-4da1-a132-358b5f6a66e9-kube-api-access-4gtqq\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186550 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnl6w\" (UniqueName: \"kubernetes.io/projected/77626dde-3586-4b33-b4a4-326bab5bfe19-kube-api-access-xnl6w\") pod \"glance-operator-controller-manager-77987cd8cd-q5klh\" (UID: \"77626dde-3586-4b33-b4a4-326bab5bfe19\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.186584 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtnc\" (UniqueName: \"kubernetes.io/projected/ed53f929-2cac-4053-8875-ad53414156c1-kube-api-access-7gtnc\") pod \"keystone-operator-controller-manager-7765d96ddf-bgw6p\" (UID: \"ed53f929-2cac-4053-8875-ad53414156c1\") " 
pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.227814 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnl6w\" (UniqueName: \"kubernetes.io/projected/77626dde-3586-4b33-b4a4-326bab5bfe19-kube-api-access-xnl6w\") pod \"glance-operator-controller-manager-77987cd8cd-q5klh\" (UID: \"77626dde-3586-4b33-b4a4-326bab5bfe19\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.239775 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbxtm\" (UniqueName: \"kubernetes.io/projected/f651f26f-55f4-47cb-a318-0e2a9512f194-kube-api-access-lbxtm\") pod \"designate-operator-controller-manager-78b4bc895b-62955\" (UID: \"f651f26f-55f4-47cb-a318-0e2a9512f194\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.244251 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.245124 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24lf8\" (UniqueName: \"kubernetes.io/projected/d605c327-c0d8-4466-b135-a1c8c777b91c-kube-api-access-24lf8\") pod \"horizon-operator-controller-manager-68c6d99b8f-crd8q\" (UID: \"d605c327-c0d8-4466-b135-a1c8c777b91c\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.267569 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.268751 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.280686 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.281925 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.283068 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rvnch" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.283466 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.283634 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5tnwd" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.288825 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.288890 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wddpc\" (UniqueName: \"kubernetes.io/projected/0f3ba133-8575-4603-ad87-77502244b892-kube-api-access-wddpc\") pod \"heat-operator-controller-manager-5f64f6f8bb-gqmt7\" (UID: \"0f3ba133-8575-4603-ad87-77502244b892\") " 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.288930 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqcz4\" (UniqueName: \"kubernetes.io/projected/786fcded-8103-4691-b8a4-fa6ef5b79ee6-kube-api-access-kqcz4\") pod \"ironic-operator-controller-manager-6c548fd776-hc7n7\" (UID: \"786fcded-8103-4691-b8a4-fa6ef5b79ee6\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.289010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gtqq\" (UniqueName: \"kubernetes.io/projected/0df6dfa3-00de-4da1-a132-358b5f6a66e9-kube-api-access-4gtqq\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.289049 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtnc\" (UniqueName: \"kubernetes.io/projected/ed53f929-2cac-4053-8875-ad53414156c1-kube-api-access-7gtnc\") pod \"keystone-operator-controller-manager-7765d96ddf-bgw6p\" (UID: \"ed53f929-2cac-4053-8875-ad53414156c1\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.289084 4756 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.289164 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert podName:0df6dfa3-00de-4da1-a132-358b5f6a66e9 nodeName:}" failed. 
No retries permitted until 2025-12-03 11:10:28.789134046 +0000 UTC m=+1039.819135290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert") pod "infra-operator-controller-manager-57548d458d-hpd8p" (UID: "0df6dfa3-00de-4da1-a132-358b5f6a66e9") : secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.322797 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtnc\" (UniqueName: \"kubernetes.io/projected/ed53f929-2cac-4053-8875-ad53414156c1-kube-api-access-7gtnc\") pod \"keystone-operator-controller-manager-7765d96ddf-bgw6p\" (UID: \"ed53f929-2cac-4053-8875-ad53414156c1\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.332742 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqcz4\" (UniqueName: \"kubernetes.io/projected/786fcded-8103-4691-b8a4-fa6ef5b79ee6-kube-api-access-kqcz4\") pod \"ironic-operator-controller-manager-6c548fd776-hc7n7\" (UID: \"786fcded-8103-4691-b8a4-fa6ef5b79ee6\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.343460 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.344257 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wddpc\" (UniqueName: \"kubernetes.io/projected/0f3ba133-8575-4603-ad87-77502244b892-kube-api-access-wddpc\") pod \"heat-operator-controller-manager-5f64f6f8bb-gqmt7\" (UID: \"0f3ba133-8575-4603-ad87-77502244b892\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.345528 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.358168 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gtqq\" (UniqueName: \"kubernetes.io/projected/0df6dfa3-00de-4da1-a132-358b5f6a66e9-kube-api-access-4gtqq\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.368288 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.388103 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.399052 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tc6\" (UniqueName: \"kubernetes.io/projected/0606b8cc-309a-4759-82d9-989ef224169b-kube-api-access-77tc6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2kq6w\" (UID: \"0606b8cc-309a-4759-82d9-989ef224169b\") " 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.399573 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzs7v\" (UniqueName: \"kubernetes.io/projected/419295e1-b487-4701-9fc8-0273a49277dc-kube-api-access-wzs7v\") pod \"manila-operator-controller-manager-7c79b5df47-dk746\" (UID: \"419295e1-b487-4701-9fc8-0273a49277dc\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.402250 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn"] Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.403058 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="registry-server" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.403080 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="registry-server" Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.403113 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="extract-utilities" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.403125 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="extract-utilities" Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.403144 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="extract-content" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.403151 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="extract-content" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 
11:10:28.403487 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" containerName="registry-server" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.405625 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.409605 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-v277n" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.436016 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.437642 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.453465 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.451579 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-pjdx4" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.476540 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.502932 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-catalog-content\") pod \"24f50f1e-1793-40ca-b105-31910761c4ed\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.503568 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh2w2\" (UniqueName: \"kubernetes.io/projected/24f50f1e-1793-40ca-b105-31910761c4ed-kube-api-access-sh2w2\") pod \"24f50f1e-1793-40ca-b105-31910761c4ed\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.503641 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-utilities\") pod \"24f50f1e-1793-40ca-b105-31910761c4ed\" (UID: \"24f50f1e-1793-40ca-b105-31910761c4ed\") " Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.504037 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5kl\" (UniqueName: \"kubernetes.io/projected/ef02c569-7cc5-43a3-a4e9-d8c97cc07465-kube-api-access-cb5kl\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-xdlgn\" (UID: \"ef02c569-7cc5-43a3-a4e9-d8c97cc07465\") " 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.504121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77tc6\" (UniqueName: \"kubernetes.io/projected/0606b8cc-309a-4759-82d9-989ef224169b-kube-api-access-77tc6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2kq6w\" (UID: \"0606b8cc-309a-4759-82d9-989ef224169b\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.504154 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzs7v\" (UniqueName: \"kubernetes.io/projected/419295e1-b487-4701-9fc8-0273a49277dc-kube-api-access-wzs7v\") pod \"manila-operator-controller-manager-7c79b5df47-dk746\" (UID: \"419295e1-b487-4701-9fc8-0273a49277dc\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.508278 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-utilities" (OuterVolumeSpecName: "utilities") pod "24f50f1e-1793-40ca-b105-31910761c4ed" (UID: "24f50f1e-1793-40ca-b105-31910761c4ed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.519294 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.527605 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fqt2k" event={"ID":"24f50f1e-1793-40ca-b105-31910761c4ed","Type":"ContainerDied","Data":"1a44500d43aafd1b4c4ea4c278108e4ffc70a29b61fe25a37f99f714f27561dd"} Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.527665 4756 scope.go:117] "RemoveContainer" containerID="2f6416d9132cfb27d34443f65b2b1ac44336611b1bb84e2efc4dda897b9cebe5" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.528317 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fqt2k" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.552386 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tc6\" (UniqueName: \"kubernetes.io/projected/0606b8cc-309a-4759-82d9-989ef224169b-kube-api-access-77tc6\") pod \"mariadb-operator-controller-manager-56bbcc9d85-2kq6w\" (UID: \"0606b8cc-309a-4759-82d9-989ef224169b\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.563974 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.583667 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.588497 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-wlfff"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.589785 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.593885 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jzzsx" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.598031 4756 scope.go:117] "RemoveContainer" containerID="c9da5abf19aa10ac30dd4f4348881a67fb195a38389a0cf42f0bd9d5c072daa5" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.607124 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb5kl\" (UniqueName: \"kubernetes.io/projected/ef02c569-7cc5-43a3-a4e9-d8c97cc07465-kube-api-access-cb5kl\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-xdlgn\" (UID: \"ef02c569-7cc5-43a3-a4e9-d8c97cc07465\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.609796 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxff\" (UniqueName: \"kubernetes.io/projected/53d098b0-4833-4b74-b13d-57a4d9c5ee13-kube-api-access-6dxff\") pod \"nova-operator-controller-manager-697bc559fc-cxgxw\" (UID: \"53d098b0-4833-4b74-b13d-57a4d9c5ee13\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.610082 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.639305 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-wlfff"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.644852 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.646317 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.653053 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.658497 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-76n4q" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.659008 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24f50f1e-1793-40ca-b105-31910761c4ed" (UID: "24f50f1e-1793-40ca-b105-31910761c4ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.666401 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f50f1e-1793-40ca-b105-31910761c4ed-kube-api-access-sh2w2" (OuterVolumeSpecName: "kube-api-access-sh2w2") pod "24f50f1e-1793-40ca-b105-31910761c4ed" (UID: "24f50f1e-1793-40ca-b105-31910761c4ed"). 
InnerVolumeSpecName "kube-api-access-sh2w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.668724 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb5kl\" (UniqueName: \"kubernetes.io/projected/ef02c569-7cc5-43a3-a4e9-d8c97cc07465-kube-api-access-cb5kl\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-xdlgn\" (UID: \"ef02c569-7cc5-43a3-a4e9-d8c97cc07465\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.670746 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.674491 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzs7v\" (UniqueName: \"kubernetes.io/projected/419295e1-b487-4701-9fc8-0273a49277dc-kube-api-access-wzs7v\") pod \"manila-operator-controller-manager-7c79b5df47-dk746\" (UID: \"419295e1-b487-4701-9fc8-0273a49277dc\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.696521 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.710405 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.711370 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxff\" (UniqueName: \"kubernetes.io/projected/53d098b0-4833-4b74-b13d-57a4d9c5ee13-kube-api-access-6dxff\") pod \"nova-operator-controller-manager-697bc559fc-cxgxw\" (UID: \"53d098b0-4833-4b74-b13d-57a4d9c5ee13\") " 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.711419 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjlb\" (UniqueName: \"kubernetes.io/projected/52cfac1b-a56f-4189-b9f0-5c4a8acf2069-kube-api-access-qtjlb\") pod \"octavia-operator-controller-manager-998648c74-wlfff\" (UID: \"52cfac1b-a56f-4189-b9f0-5c4a8acf2069\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.711459 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24f50f1e-1793-40ca-b105-31910761c4ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.711471 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh2w2\" (UniqueName: \"kubernetes.io/projected/24f50f1e-1793-40ca-b105-31910761c4ed-kube-api-access-sh2w2\") on node \"crc\" DevicePath \"\"" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.711933 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.728027 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.729680 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.740844 4756 scope.go:117] "RemoveContainer" containerID="0b204130c7a7d959e7d46b07db1f929deb0bdad806c18c9d3dd19e460c12a016" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.750677 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-n97wt" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.751010 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dcnmd" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.752627 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.778228 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.803238 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxff\" (UniqueName: \"kubernetes.io/projected/53d098b0-4833-4b74-b13d-57a4d9c5ee13-kube-api-access-6dxff\") pod \"nova-operator-controller-manager-697bc559fc-cxgxw\" (UID: \"53d098b0-4833-4b74-b13d-57a4d9c5ee13\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.804650 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.810802 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 
11:10:28.813062 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l56s\" (UniqueName: \"kubernetes.io/projected/ded35123-da5e-4a26-9caf-61d6c9d920cd-kube-api-access-8l56s\") pod \"placement-operator-controller-manager-78f8948974-9r6sc\" (UID: \"ded35123-da5e-4a26-9caf-61d6c9d920cd\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.813097 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.813285 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpgv\" (UniqueName: \"kubernetes.io/projected/02b366f7-138d-4a78-9772-8e22db219753-kube-api-access-5bpgv\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.813374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjlb\" (UniqueName: \"kubernetes.io/projected/52cfac1b-a56f-4189-b9f0-5c4a8acf2069-kube-api-access-qtjlb\") pod \"octavia-operator-controller-manager-998648c74-wlfff\" (UID: \"52cfac1b-a56f-4189-b9f0-5c4a8acf2069\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.813443 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkh5f\" (UniqueName: \"kubernetes.io/projected/ce02ca0e-abd9-4a57-a68d-7d35f304f8fa-kube-api-access-bkh5f\") pod \"ovn-operator-controller-manager-b6456fdb6-bbqwn\" (UID: \"ce02ca0e-abd9-4a57-a68d-7d35f304f8fa\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.813595 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.813776 4756 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.813824 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert podName:0df6dfa3-00de-4da1-a132-358b5f6a66e9 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:29.813807134 +0000 UTC m=+1040.843808368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert") pod "infra-operator-controller-manager-57548d458d-hpd8p" (UID: "0df6dfa3-00de-4da1-a132-358b5f6a66e9") : secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.819842 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.824477 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xqjtr" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.825488 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-srv8g"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.828269 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.830582 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sphjb" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.835563 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjlb\" (UniqueName: \"kubernetes.io/projected/52cfac1b-a56f-4189-b9f0-5c4a8acf2069-kube-api-access-qtjlb\") pod \"octavia-operator-controller-manager-998648c74-wlfff\" (UID: \"52cfac1b-a56f-4189-b9f0-5c4a8acf2069\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.841495 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.856767 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.866744 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.868531 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.872463 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-whs99" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.895248 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-srv8g"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.913054 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.915514 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkh5f\" (UniqueName: \"kubernetes.io/projected/ce02ca0e-abd9-4a57-a68d-7d35f304f8fa-kube-api-access-bkh5f\") pod \"ovn-operator-controller-manager-b6456fdb6-bbqwn\" (UID: \"ce02ca0e-abd9-4a57-a68d-7d35f304f8fa\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.915606 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtwdg\" (UniqueName: 
\"kubernetes.io/projected/6c2df820-90b9-48fe-8dd0-8731028d0dbd-kube-api-access-vtwdg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-9t6v5\" (UID: \"6c2df820-90b9-48fe-8dd0-8731028d0dbd\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.915655 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjh2v\" (UniqueName: \"kubernetes.io/projected/a5dd2be8-1335-43f1-83af-2a0efabcce1e-kube-api-access-rjh2v\") pod \"test-operator-controller-manager-5854674fcc-srv8g\" (UID: \"a5dd2be8-1335-43f1-83af-2a0efabcce1e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.915700 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l56s\" (UniqueName: \"kubernetes.io/projected/ded35123-da5e-4a26-9caf-61d6c9d920cd-kube-api-access-8l56s\") pod \"placement-operator-controller-manager-78f8948974-9r6sc\" (UID: \"ded35123-da5e-4a26-9caf-61d6c9d920cd\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.915730 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.915778 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bpgv\" (UniqueName: \"kubernetes.io/projected/02b366f7-138d-4a78-9772-8e22db219753-kube-api-access-5bpgv\") pod 
\"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.917565 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.918761 4756 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:28 crc kubenswrapper[4756]: E1203 11:10:28.918814 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert podName:02b366f7-138d-4a78-9772-8e22db219753 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:29.418795911 +0000 UTC m=+1040.448797165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" (UID: "02b366f7-138d-4a78-9772-8e22db219753") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.923041 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-lfllv" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.923488 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.928463 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.943552 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkh5f\" (UniqueName: \"kubernetes.io/projected/ce02ca0e-abd9-4a57-a68d-7d35f304f8fa-kube-api-access-bkh5f\") pod \"ovn-operator-controller-manager-b6456fdb6-bbqwn\" (UID: \"ce02ca0e-abd9-4a57-a68d-7d35f304f8fa\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.944661 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.955255 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw"] Dec 03 11:10:28 crc kubenswrapper[4756]: I1203 11:10:28.964795 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l56s\" (UniqueName: \"kubernetes.io/projected/ded35123-da5e-4a26-9caf-61d6c9d920cd-kube-api-access-8l56s\") pod \"placement-operator-controller-manager-78f8948974-9r6sc\" (UID: \"ded35123-da5e-4a26-9caf-61d6c9d920cd\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.014066 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bpgv\" (UniqueName: \"kubernetes.io/projected/02b366f7-138d-4a78-9772-8e22db219753-kube-api-access-5bpgv\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.016837 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvbb\" (UniqueName: \"kubernetes.io/projected/e8df208a-e66b-4532-bb65-8c673f2659bc-kube-api-access-qqvbb\") pod \"swift-operator-controller-manager-5f8c65bbfc-v4kqz\" (UID: \"e8df208a-e66b-4532-bb65-8c673f2659bc\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.016920 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtwdg\" (UniqueName: \"kubernetes.io/projected/6c2df820-90b9-48fe-8dd0-8731028d0dbd-kube-api-access-vtwdg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-9t6v5\" (UID: \"6c2df820-90b9-48fe-8dd0-8731028d0dbd\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.016970 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn5kf\" (UniqueName: \"kubernetes.io/projected/77043fde-1e4d-4590-b97c-3de89953581a-kube-api-access-hn5kf\") pod \"watcher-operator-controller-manager-769dc69bc-g6zjw\" (UID: \"77043fde-1e4d-4590-b97c-3de89953581a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.016991 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjh2v\" (UniqueName: \"kubernetes.io/projected/a5dd2be8-1335-43f1-83af-2a0efabcce1e-kube-api-access-rjh2v\") pod \"test-operator-controller-manager-5854674fcc-srv8g\" (UID: \"a5dd2be8-1335-43f1-83af-2a0efabcce1e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" Dec 03 11:10:29 crc 
kubenswrapper[4756]: I1203 11:10:29.074429 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.078979 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjh2v\" (UniqueName: \"kubernetes.io/projected/a5dd2be8-1335-43f1-83af-2a0efabcce1e-kube-api-access-rjh2v\") pod \"test-operator-controller-manager-5854674fcc-srv8g\" (UID: \"a5dd2be8-1335-43f1-83af-2a0efabcce1e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.097324 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.100131 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtwdg\" (UniqueName: \"kubernetes.io/projected/6c2df820-90b9-48fe-8dd0-8731028d0dbd-kube-api-access-vtwdg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-9t6v5\" (UID: \"6c2df820-90b9-48fe-8dd0-8731028d0dbd\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.132439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn5kf\" (UniqueName: \"kubernetes.io/projected/77043fde-1e4d-4590-b97c-3de89953581a-kube-api-access-hn5kf\") pod \"watcher-operator-controller-manager-769dc69bc-g6zjw\" (UID: \"77043fde-1e4d-4590-b97c-3de89953581a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.132599 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvbb\" (UniqueName: 
\"kubernetes.io/projected/e8df208a-e66b-4532-bb65-8c673f2659bc-kube-api-access-qqvbb\") pod \"swift-operator-controller-manager-5f8c65bbfc-v4kqz\" (UID: \"e8df208a-e66b-4532-bb65-8c673f2659bc\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.133226 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.142589 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.145772 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.174976 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.182850 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.184735 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4p49c" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.186891 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.224232 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvbb\" (UniqueName: \"kubernetes.io/projected/e8df208a-e66b-4532-bb65-8c673f2659bc-kube-api-access-qqvbb\") pod \"swift-operator-controller-manager-5f8c65bbfc-v4kqz\" (UID: \"e8df208a-e66b-4532-bb65-8c673f2659bc\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.224574 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.232261 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.233272 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.236088 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwv9b\" (UniqueName: \"kubernetes.io/projected/d88ee2e4-3954-487f-991e-b0f3e66b176a-kube-api-access-fwv9b\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.236181 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.236206 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.238679 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hvk92" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.254167 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn5kf\" (UniqueName: \"kubernetes.io/projected/77043fde-1e4d-4590-b97c-3de89953581a-kube-api-access-hn5kf\") pod 
\"watcher-operator-controller-manager-769dc69bc-g6zjw\" (UID: \"77043fde-1e4d-4590-b97c-3de89953581a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.328429 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.336594 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.336652 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fqt2k"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.336669 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fqt2k"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.344942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzlqm\" (UniqueName: \"kubernetes.io/projected/0b3df789-34ca-4ab2-8fb7-a8fee4df46a7-kube-api-access-lzlqm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fkx8p\" (UID: \"0b3df789-34ca-4ab2-8fb7-a8fee4df46a7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.345196 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.345331 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.352025 4756 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.352207 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:29.85218616 +0000 UTC m=+1040.882187404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "metrics-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.352603 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwv9b\" (UniqueName: \"kubernetes.io/projected/d88ee2e4-3954-487f-991e-b0f3e66b176a-kube-api-access-fwv9b\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.353217 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.353252 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:29.853243672 +0000 UTC m=+1040.883244916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.386419 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwv9b\" (UniqueName: \"kubernetes.io/projected/d88ee2e4-3954-487f-991e-b0f3e66b176a-kube-api-access-fwv9b\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.444055 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.457375 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.457524 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzlqm\" (UniqueName: \"kubernetes.io/projected/0b3df789-34ca-4ab2-8fb7-a8fee4df46a7-kube-api-access-lzlqm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fkx8p\" (UID: 
\"0b3df789-34ca-4ab2-8fb7-a8fee4df46a7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.457903 4756 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.458050 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert podName:02b366f7-138d-4a78-9772-8e22db219753 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:30.458004862 +0000 UTC m=+1041.488006106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" (UID: "02b366f7-138d-4a78-9772-8e22db219753") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.469397 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.487346 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.487896 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzlqm\" (UniqueName: \"kubernetes.io/projected/0b3df789-34ca-4ab2-8fb7-a8fee4df46a7-kube-api-access-lzlqm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fkx8p\" (UID: \"0b3df789-34ca-4ab2-8fb7-a8fee4df46a7\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" Dec 03 11:10:29 crc kubenswrapper[4756]: W1203 11:10:29.516501 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02ff397c_db50_4b3b_be2c_b43dcd1c71db.slice/crio-9b1b87ab16397728cf3aea39bc1260cfeeb5643cca38bddc523924c3e9249b6a WatchSource:0}: Error finding container 9b1b87ab16397728cf3aea39bc1260cfeeb5643cca38bddc523924c3e9249b6a: Status 404 returned error can't find the container with id 9b1b87ab16397728cf3aea39bc1260cfeeb5643cca38bddc523924c3e9249b6a Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.557129 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.590479 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" event={"ID":"02ff397c-db50-4b3b-be2c-b43dcd1c71db","Type":"ContainerStarted","Data":"9b1b87ab16397728cf3aea39bc1260cfeeb5643cca38bddc523924c3e9249b6a"} Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.600944 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.613241 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" event={"ID":"22eded41-fc81-4a9c-b831-8cfb8d339258","Type":"ContainerStarted","Data":"27e9aa52a4009608ff8c3989219b66105fb3803965fa7644d75848bb5791b8b8"} Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.622085 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-62955"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.632873 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh"] Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.865992 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.866145 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: I1203 11:10:29.866193 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.866495 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.866592 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:30.866569916 +0000 UTC m=+1041.896571160 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.867217 4756 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.867270 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert podName:0df6dfa3-00de-4da1-a132-358b5f6a66e9 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:31.867241667 +0000 UTC m=+1042.897242911 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert") pod "infra-operator-controller-manager-57548d458d-hpd8p" (UID: "0df6dfa3-00de-4da1-a132-358b5f6a66e9") : secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.867318 4756 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 11:10:29 crc kubenswrapper[4756]: E1203 11:10:29.867369 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:30.867359081 +0000 UTC m=+1041.897360325 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "metrics-server-cert" not found Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.115643 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7"] Dec 03 11:10:30 crc kubenswrapper[4756]: W1203 11:10:30.137638 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod786fcded_8103_4691_b8a4_fa6ef5b79ee6.slice/crio-21ac546311b9b8258b398fafe80510c82c3f60046d72be84023d231ff368ccbb WatchSource:0}: Error finding container 21ac546311b9b8258b398fafe80510c82c3f60046d72be84023d231ff368ccbb: Status 404 returned error can't find the container with id 21ac546311b9b8258b398fafe80510c82c3f60046d72be84023d231ff368ccbb Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.148765 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.181395 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.399798 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.411151 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746"] Dec 03 11:10:30 crc kubenswrapper[4756]: W1203 11:10:30.413311 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419295e1_b487_4701_9fc8_0273a49277dc.slice/crio-b582fbb34f09657419805e17fb593e9af01fab84b830afb4084ffbf4af7a601b WatchSource:0}: Error finding container b582fbb34f09657419805e17fb593e9af01fab84b830afb4084ffbf4af7a601b: Status 404 returned error can't find the container with id b582fbb34f09657419805e17fb593e9af01fab84b830afb4084ffbf4af7a601b Dec 03 11:10:30 crc kubenswrapper[4756]: W1203 11:10:30.417269 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0606b8cc_309a_4759_82d9_989ef224169b.slice/crio-709a2525f7bf1c562a10fd5d1cbd87b5839e6468e96ee5688522c9c5ccbfff6c WatchSource:0}: Error finding container 709a2525f7bf1c562a10fd5d1cbd87b5839e6468e96ee5688522c9c5ccbfff6c: Status 404 returned error can't find the container with id 709a2525f7bf1c562a10fd5d1cbd87b5839e6468e96ee5688522c9c5ccbfff6c Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.494407 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:30 crc kubenswrapper[4756]: E1203 11:10:30.495070 4756 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:30 crc kubenswrapper[4756]: E1203 11:10:30.495230 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert podName:02b366f7-138d-4a78-9772-8e22db219753 nodeName:}" failed. 
No retries permitted until 2025-12-03 11:10:32.495203578 +0000 UTC m=+1043.525204822 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" (UID: "02b366f7-138d-4a78-9772-8e22db219753") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.663749 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" event={"ID":"ed53f929-2cac-4053-8875-ad53414156c1","Type":"ContainerStarted","Data":"4e2c02322c4d3d4e9b1bd201b0dfc972577bdd269ab168a787d0f6a5b9572a02"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.665597 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" event={"ID":"f651f26f-55f4-47cb-a318-0e2a9512f194","Type":"ContainerStarted","Data":"0137d94fcf2cb5233d8c4092c3c1a6d14caf63a030e27dc51190b933d1f185dd"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.675246 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" event={"ID":"419295e1-b487-4701-9fc8-0273a49277dc","Type":"ContainerStarted","Data":"b582fbb34f09657419805e17fb593e9af01fab84b830afb4084ffbf4af7a601b"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.690535 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" event={"ID":"786fcded-8103-4691-b8a4-fa6ef5b79ee6","Type":"ContainerStarted","Data":"21ac546311b9b8258b398fafe80510c82c3f60046d72be84023d231ff368ccbb"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.715715 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" event={"ID":"0f3ba133-8575-4603-ad87-77502244b892","Type":"ContainerStarted","Data":"22e2e9a3ae8820754b4d5dcb20b47c99e46f6cb697bff08e4c4001af6495de4b"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.723775 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" event={"ID":"d605c327-c0d8-4466-b135-a1c8c777b91c","Type":"ContainerStarted","Data":"be7b16349de6f4dd961fdeeec909dc7a412a6e536763bd8c69309e6b43575a31"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.724352 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.725215 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" event={"ID":"0606b8cc-309a-4759-82d9-989ef224169b","Type":"ContainerStarted","Data":"709a2525f7bf1c562a10fd5d1cbd87b5839e6468e96ee5688522c9c5ccbfff6c"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.737095 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" event={"ID":"77626dde-3586-4b33-b4a4-326bab5bfe19","Type":"ContainerStarted","Data":"44ae84d74c1bf4fe2ef8acbf1523352f2f1d9e8cdb27dec2fbd7cbc8ed2890bb"} Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.776113 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-srv8g"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.789820 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.882612 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.916446 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.916486 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:30 crc kubenswrapper[4756]: E1203 11:10:30.916602 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 11:10:30 crc kubenswrapper[4756]: E1203 11:10:30.916641 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:32.916627553 +0000 UTC m=+1043.946628797 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "webhook-server-cert" not found Dec 03 11:10:30 crc kubenswrapper[4756]: E1203 11:10:30.916942 4756 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 11:10:30 crc kubenswrapper[4756]: E1203 11:10:30.916982 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:32.916973434 +0000 UTC m=+1043.946974678 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "metrics-server-cert" not found Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.920754 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.933873 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.947074 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-wlfff"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.953560 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn"] Dec 03 11:10:30 crc kubenswrapper[4756]: I1203 11:10:30.988001 4756 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz"] Dec 03 11:10:30 crc kubenswrapper[4756]: W1203 11:10:30.995630 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef02c569_7cc5_43a3_a4e9_d8c97cc07465.slice/crio-cbc4491d077aa9fe27e8b23d4c6830c1d6e11eb9c28247e23f0827cd6bc38d31 WatchSource:0}: Error finding container cbc4491d077aa9fe27e8b23d4c6830c1d6e11eb9c28247e23f0827cd6bc38d31: Status 404 returned error can't find the container with id cbc4491d077aa9fe27e8b23d4c6830c1d6e11eb9c28247e23f0827cd6bc38d31 Dec 03 11:10:31 crc kubenswrapper[4756]: W1203 11:10:31.003236 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce02ca0e_abd9_4a57_a68d_7d35f304f8fa.slice/crio-611c129ea311943b2a306949ca225266b8e1882c5af1d2786e4c9db885fabcc7 WatchSource:0}: Error finding container 611c129ea311943b2a306949ca225266b8e1882c5af1d2786e4c9db885fabcc7: Status 404 returned error can't find the container with id 611c129ea311943b2a306949ca225266b8e1882c5af1d2786e4c9db885fabcc7 Dec 03 11:10:31 crc kubenswrapper[4756]: W1203 11:10:31.007146 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8df208a_e66b_4532_bb65_8c673f2659bc.slice/crio-55fff7f2b9ed9478fce6a5b2a0728960d55bcf591b5fbcb1d84a8c75c436ebdb WatchSource:0}: Error finding container 55fff7f2b9ed9478fce6a5b2a0728960d55bcf591b5fbcb1d84a8c75c436ebdb: Status 404 returned error can't find the container with id 55fff7f2b9ed9478fce6a5b2a0728960d55bcf591b5fbcb1d84a8c75c436ebdb Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.014028 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkh5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-bbqwn_openstack-operators(ce02ca0e-abd9-4a57-a68d-7d35f304f8fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.016089 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqvbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-v4kqz_openstack-operators(e8df208a-e66b-4532-bb65-8c673f2659bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.018511 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkh5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-bbqwn_openstack-operators(ce02ca0e-abd9-4a57-a68d-7d35f304f8fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.021356 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" podUID="ce02ca0e-abd9-4a57-a68d-7d35f304f8fa" Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.023005 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqvbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-v4kqz_openstack-operators(e8df208a-e66b-4532-bb65-8c673f2659bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.024755 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" 
podUID="e8df208a-e66b-4532-bb65-8c673f2659bc" Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.064895 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p"] Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.082209 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lzlqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fkx8p_openstack-operators(0b3df789-34ca-4ab2-8fb7-a8fee4df46a7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.083489 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" podUID="0b3df789-34ca-4ab2-8fb7-a8fee4df46a7" Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.275407 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f50f1e-1793-40ca-b105-31910761c4ed" path="/var/lib/kubelet/pods/24f50f1e-1793-40ca-b105-31910761c4ed/volumes" Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.751850 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" event={"ID":"52cfac1b-a56f-4189-b9f0-5c4a8acf2069","Type":"ContainerStarted","Data":"e14248db897e7116a97ab5b6e20e3445ce2cbc3ba34cade79d0dd408a0816ab6"} Dec 03 
11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.754307 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" event={"ID":"0b3df789-34ca-4ab2-8fb7-a8fee4df46a7","Type":"ContainerStarted","Data":"bf544c7488d7e223a873ce71f0edbbc8ed0ea3d7d74ca760d415d80bb4dc652b"} Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.757640 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" podUID="0b3df789-34ca-4ab2-8fb7-a8fee4df46a7" Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.759678 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" event={"ID":"a5dd2be8-1335-43f1-83af-2a0efabcce1e","Type":"ContainerStarted","Data":"c5efa7820e6d71c758f9680ffdb513cca1976475cde156de0467c69925b64f4e"} Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.762555 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" event={"ID":"e8df208a-e66b-4532-bb65-8c673f2659bc","Type":"ContainerStarted","Data":"55fff7f2b9ed9478fce6a5b2a0728960d55bcf591b5fbcb1d84a8c75c436ebdb"} Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.764736 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" event={"ID":"ef02c569-7cc5-43a3-a4e9-d8c97cc07465","Type":"ContainerStarted","Data":"cbc4491d077aa9fe27e8b23d4c6830c1d6e11eb9c28247e23f0827cd6bc38d31"} Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.768132 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" podUID="e8df208a-e66b-4532-bb65-8c673f2659bc" Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.773943 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" event={"ID":"6c2df820-90b9-48fe-8dd0-8731028d0dbd","Type":"ContainerStarted","Data":"2e29a333cbefab4cf2ade391df78100644a3ddf0830dd08694b7de83abf8269c"} Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.782172 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" event={"ID":"ce02ca0e-abd9-4a57-a68d-7d35f304f8fa","Type":"ContainerStarted","Data":"611c129ea311943b2a306949ca225266b8e1882c5af1d2786e4c9db885fabcc7"} Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.788069 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" podUID="ce02ca0e-abd9-4a57-a68d-7d35f304f8fa" Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.790061 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" event={"ID":"ded35123-da5e-4a26-9caf-61d6c9d920cd","Type":"ContainerStarted","Data":"ca2a12f5b12537dbe7891075632031189c77a2e76018f983556be1dea2d3c6be"} Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.798301 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" event={"ID":"53d098b0-4833-4b74-b13d-57a4d9c5ee13","Type":"ContainerStarted","Data":"95e45fe21b73c04dbc229511671b39f240264440fa8826696627d9fad26b24b0"} Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.803259 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" event={"ID":"77043fde-1e4d-4590-b97c-3de89953581a","Type":"ContainerStarted","Data":"46257c31fd8b91a602d7018ef8955c8f567dc931d8ada1bc319b349643540306"} Dec 03 11:10:31 crc kubenswrapper[4756]: I1203 11:10:31.957066 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.957817 4756 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:31 crc kubenswrapper[4756]: E1203 11:10:31.957946 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert podName:0df6dfa3-00de-4da1-a132-358b5f6a66e9 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:35.957910546 +0000 UTC m=+1046.987911790 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert") pod "infra-operator-controller-manager-57548d458d-hpd8p" (UID: "0df6dfa3-00de-4da1-a132-358b5f6a66e9") : secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:32 crc kubenswrapper[4756]: I1203 11:10:32.567225 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.567488 4756 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.567880 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert podName:02b366f7-138d-4a78-9772-8e22db219753 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:36.567860075 +0000 UTC m=+1047.597861319 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" (UID: "02b366f7-138d-4a78-9772-8e22db219753") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.856758 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" podUID="0b3df789-34ca-4ab2-8fb7-a8fee4df46a7" Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.858339 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" podUID="e8df208a-e66b-4532-bb65-8c673f2659bc" Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.858767 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" podUID="ce02ca0e-abd9-4a57-a68d-7d35f304f8fa" Dec 03 11:10:32 crc kubenswrapper[4756]: I1203 11:10:32.977659 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:32 crc kubenswrapper[4756]: I1203 11:10:32.977711 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.977922 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.978001 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:36.977981327 +0000 UTC m=+1048.007982571 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "webhook-server-cert" not found Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.978533 4756 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 11:10:32 crc kubenswrapper[4756]: E1203 11:10:32.978811 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:36.978626397 +0000 UTC m=+1048.008627821 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "metrics-server-cert" not found Dec 03 11:10:35 crc kubenswrapper[4756]: I1203 11:10:35.983023 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:35 crc kubenswrapper[4756]: E1203 11:10:35.983378 4756 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:35 crc kubenswrapper[4756]: E1203 11:10:35.983678 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert 
podName:0df6dfa3-00de-4da1-a132-358b5f6a66e9 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:43.983646907 +0000 UTC m=+1055.013648161 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert") pod "infra-operator-controller-manager-57548d458d-hpd8p" (UID: "0df6dfa3-00de-4da1-a132-358b5f6a66e9") : secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:36 crc kubenswrapper[4756]: I1203 11:10:36.591636 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:36 crc kubenswrapper[4756]: E1203 11:10:36.592003 4756 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:36 crc kubenswrapper[4756]: E1203 11:10:36.592147 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert podName:02b366f7-138d-4a78-9772-8e22db219753 nodeName:}" failed. No retries permitted until 2025-12-03 11:10:44.59210086 +0000 UTC m=+1055.622102294 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" (UID: "02b366f7-138d-4a78-9772-8e22db219753") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 11:10:37 crc kubenswrapper[4756]: I1203 11:10:36.999493 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:37 crc kubenswrapper[4756]: I1203 11:10:36.999553 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:37 crc kubenswrapper[4756]: E1203 11:10:36.999730 4756 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 11:10:37 crc kubenswrapper[4756]: E1203 11:10:36.999745 4756 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 11:10:37 crc kubenswrapper[4756]: E1203 11:10:36.999796 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:44.999776865 +0000 UTC m=+1056.029778109 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "webhook-server-cert" not found Dec 03 11:10:37 crc kubenswrapper[4756]: E1203 11:10:36.999841 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs podName:d88ee2e4-3954-487f-991e-b0f3e66b176a nodeName:}" failed. No retries permitted until 2025-12-03 11:10:44.999817117 +0000 UTC m=+1056.029818541 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs") pod "openstack-operator-controller-manager-6c5c989645-dsrph" (UID: "d88ee2e4-3954-487f-991e-b0f3e66b176a") : secret "metrics-server-cert" not found Dec 03 11:10:44 crc kubenswrapper[4756]: I1203 11:10:44.022148 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:10:44 crc kubenswrapper[4756]: E1203 11:10:44.022478 4756 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:44 crc kubenswrapper[4756]: E1203 11:10:44.022819 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert podName:0df6dfa3-00de-4da1-a132-358b5f6a66e9 nodeName:}" failed. No retries permitted until 2025-12-03 11:11:00.022765245 +0000 UTC m=+1071.052766529 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert") pod "infra-operator-controller-manager-57548d458d-hpd8p" (UID: "0df6dfa3-00de-4da1-a132-358b5f6a66e9") : secret "infra-operator-webhook-server-cert" not found Dec 03 11:10:44 crc kubenswrapper[4756]: I1203 11:10:44.631694 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:44 crc kubenswrapper[4756]: I1203 11:10:44.639075 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02b366f7-138d-4a78-9772-8e22db219753-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4\" (UID: \"02b366f7-138d-4a78-9772-8e22db219753\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:44 crc kubenswrapper[4756]: I1203 11:10:44.921445 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:10:45 crc kubenswrapper[4756]: I1203 11:10:45.038998 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:45 crc kubenswrapper[4756]: I1203 11:10:45.039087 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:45 crc kubenswrapper[4756]: I1203 11:10:45.052082 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-webhook-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:45 crc kubenswrapper[4756]: I1203 11:10:45.052408 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88ee2e4-3954-487f-991e-b0f3e66b176a-metrics-certs\") pod \"openstack-operator-controller-manager-6c5c989645-dsrph\" (UID: \"d88ee2e4-3954-487f-991e-b0f3e66b176a\") " pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:45 crc kubenswrapper[4756]: I1203 11:10:45.135903 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:10:45 crc kubenswrapper[4756]: E1203 11:10:45.572619 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 03 11:10:45 crc kubenswrapper[4756]: E1203 11:10:45.572829 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vtwdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-9t6v5_openstack-operators(6c2df820-90b9-48fe-8dd0-8731028d0dbd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:10:46 crc kubenswrapper[4756]: E1203 11:10:46.896458 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Dec 03 11:10:46 crc kubenswrapper[4756]: E1203 11:10:46.897101 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbxtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-62955_openstack-operators(f651f26f-55f4-47cb-a318-0e2a9512f194): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:10:56 crc kubenswrapper[4756]: E1203 11:10:56.274504 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 03 11:10:56 crc kubenswrapper[4756]: E1203 11:10:56.275556 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtjlb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-wlfff_openstack-operators(52cfac1b-a56f-4189-b9f0-5c4a8acf2069): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:10:56 crc kubenswrapper[4756]: I1203 11:10:56.278238 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:10:57 crc kubenswrapper[4756]: E1203 11:10:57.638330 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 03 11:10:57 crc kubenswrapper[4756]: E1203 11:10:57.638660 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lzlqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fkx8p_openstack-operators(0b3df789-34ca-4ab2-8fb7-a8fee4df46a7): 
ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" logger="UnhandledError" Dec 03 11:10:57 crc kubenswrapper[4756]: E1203 11:10:57.639909 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \\\"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\\\": context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" podUID="0b3df789-34ca-4ab2-8fb7-a8fee4df46a7" Dec 03 11:10:57 crc kubenswrapper[4756]: E1203 11:10:57.657373 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 03 11:10:57 crc kubenswrapper[4756]: E1203 11:10:57.657693 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l56s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-9r6sc_openstack-operators(ded35123-da5e-4a26-9caf-61d6c9d920cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:10:58 crc kubenswrapper[4756]: E1203 11:10:58.591725 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 03 11:10:58 crc kubenswrapper[4756]: E1203 11:10:58.592697 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cb5kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-xdlgn_openstack-operators(ef02c569-7cc5-43a3-a4e9-d8c97cc07465): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:10:59 crc kubenswrapper[4756]: E1203 11:10:59.653232 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 03 11:10:59 crc kubenswrapper[4756]: E1203 11:10:59.653651 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bwcdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-cg2g8_openstack-operators(02ff397c-db50-4b3b-be2c-b43dcd1c71db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:11:00 crc kubenswrapper[4756]: I1203 11:11:00.035353 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:11:00 crc kubenswrapper[4756]: I1203 11:11:00.064383 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df6dfa3-00de-4da1-a132-358b5f6a66e9-cert\") pod \"infra-operator-controller-manager-57548d458d-hpd8p\" (UID: \"0df6dfa3-00de-4da1-a132-358b5f6a66e9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:11:00 crc kubenswrapper[4756]: E1203 11:11:00.233132 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 03 11:11:00 crc kubenswrapper[4756]: E1203 11:11:00.233399 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dxff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-cxgxw_openstack-operators(53d098b0-4833-4b74-b13d-57a4d9c5ee13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:11:00 crc kubenswrapper[4756]: I1203 11:11:00.323914 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:11:00 crc kubenswrapper[4756]: E1203 11:11:00.736222 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 03 11:11:00 crc kubenswrapper[4756]: E1203 11:11:00.736445 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gtnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-bgw6p_openstack-operators(ed53f929-2cac-4053-8875-ad53414156c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:11:02 crc kubenswrapper[4756]: I1203 11:11:02.597572 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4"] Dec 03 11:11:02 crc kubenswrapper[4756]: I1203 11:11:02.808418 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph"] Dec 03 11:11:03 crc kubenswrapper[4756]: W1203 11:11:03.047072 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88ee2e4_3954_487f_991e_b0f3e66b176a.slice/crio-7172c4f4ed515675878aeb60af4a452a59469e1edd0b1c60d1ed152aa9108762 WatchSource:0}: Error finding container 7172c4f4ed515675878aeb60af4a452a59469e1edd0b1c60d1ed152aa9108762: Status 404 returned error can't find the container with id 7172c4f4ed515675878aeb60af4a452a59469e1edd0b1c60d1ed152aa9108762 Dec 03 11:11:03 crc 
kubenswrapper[4756]: I1203 11:11:03.087559 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p"] Dec 03 11:11:03 crc kubenswrapper[4756]: W1203 11:11:03.198236 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df6dfa3_00de_4da1_a132_358b5f6a66e9.slice/crio-3913eb63e9604bcd48ec98fe77ff00fd45c6df9b0d873e2f4bc4a6915f87ff7d WatchSource:0}: Error finding container 3913eb63e9604bcd48ec98fe77ff00fd45c6df9b0d873e2f4bc4a6915f87ff7d: Status 404 returned error can't find the container with id 3913eb63e9604bcd48ec98fe77ff00fd45c6df9b0d873e2f4bc4a6915f87ff7d Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.085530 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" event={"ID":"d605c327-c0d8-4466-b135-a1c8c777b91c","Type":"ContainerStarted","Data":"66e7fb297a3fbfdc691c6d14a8c987034b7e6dbfd275feb3a8580492d27d55cd"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.088721 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" event={"ID":"d88ee2e4-3954-487f-991e-b0f3e66b176a","Type":"ContainerStarted","Data":"7172c4f4ed515675878aeb60af4a452a59469e1edd0b1c60d1ed152aa9108762"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.118944 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" event={"ID":"786fcded-8103-4691-b8a4-fa6ef5b79ee6","Type":"ContainerStarted","Data":"99482c2a61b64fe4b3f1d50ca285120e7194cd98adf579e22f00f16ebac9d16b"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.120929 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" 
event={"ID":"77043fde-1e4d-4590-b97c-3de89953581a","Type":"ContainerStarted","Data":"1fe53e8b3d99552e14b563dbd23cb20936e2d4f5b31933902e8592620ecd0de3"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.122086 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" event={"ID":"e8df208a-e66b-4532-bb65-8c673f2659bc","Type":"ContainerStarted","Data":"238a9227824596da429009c78cf229c31daf6fa96f0752ee930927e99a3f6d19"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.122803 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" event={"ID":"0df6dfa3-00de-4da1-a132-358b5f6a66e9","Type":"ContainerStarted","Data":"3913eb63e9604bcd48ec98fe77ff00fd45c6df9b0d873e2f4bc4a6915f87ff7d"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.123988 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" event={"ID":"22eded41-fc81-4a9c-b831-8cfb8d339258","Type":"ContainerStarted","Data":"ce60095e05d3874fb9c95801a376e964cefa66bc56786abc0cebdce71ab5fb03"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.125508 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" event={"ID":"0606b8cc-309a-4759-82d9-989ef224169b","Type":"ContainerStarted","Data":"123e2c5884af3775594b7319238acb41a31dea08625b9c17dbbeeb38957c1bf7"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.126738 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" event={"ID":"77626dde-3586-4b33-b4a4-326bab5bfe19","Type":"ContainerStarted","Data":"2e15f190f260993485d11d2fb3959da8e8e421b5dd34b972adb4fb76a4184d6c"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.127927 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" event={"ID":"a5dd2be8-1335-43f1-83af-2a0efabcce1e","Type":"ContainerStarted","Data":"4a7844d742ea3f76bf03249d0139e76886f35109c32032e0bd8dafdba9b6f602"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.128820 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" event={"ID":"419295e1-b487-4701-9fc8-0273a49277dc","Type":"ContainerStarted","Data":"25b6c3b0598b0a6b799085a44667dc463402eba8b5f3a25690ad27e8f34530a8"} Dec 03 11:11:04 crc kubenswrapper[4756]: I1203 11:11:04.129717 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" event={"ID":"02b366f7-138d-4a78-9772-8e22db219753","Type":"ContainerStarted","Data":"97717d2c1de6b5148f76459483248125146a4fde4c1281119add77d7ad10a0be"} Dec 03 11:11:05 crc kubenswrapper[4756]: I1203 11:11:05.318973 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" event={"ID":"0f3ba133-8575-4603-ad87-77502244b892","Type":"ContainerStarted","Data":"5e22eb3f9a6d6bbf3755ed78b857e4ab73eec2e129ce382ed502fcbfa88ee395"} Dec 03 11:11:06 crc kubenswrapper[4756]: I1203 11:11:06.334410 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" event={"ID":"d88ee2e4-3954-487f-991e-b0f3e66b176a","Type":"ContainerStarted","Data":"ebb703ae91a49b2c073ff1d9f6420901b069669d3b3fac585b2fca530c4079d3"} Dec 03 11:11:06 crc kubenswrapper[4756]: I1203 11:11:06.336370 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:11:09 crc kubenswrapper[4756]: I1203 11:11:09.412848 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" podStartSLOduration=41.412809803 podStartE2EDuration="41.412809803s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:11:06.391127592 +0000 UTC m=+1077.421128836" watchObservedRunningTime="2025-12-03 11:11:09.412809803 +0000 UTC m=+1080.442811047" Dec 03 11:11:09 crc kubenswrapper[4756]: I1203 11:11:09.492354 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" event={"ID":"ce02ca0e-abd9-4a57-a68d-7d35f304f8fa","Type":"ContainerStarted","Data":"ab107bcd8b2c8b6f344e0c5b2da7f50a5235d043c5e958d2e4240146641b4d8d"} Dec 03 11:11:12 crc kubenswrapper[4756]: E1203 11:11:12.313580 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" podUID="ed53f929-2cac-4053-8875-ad53414156c1" Dec 03 11:11:12 crc kubenswrapper[4756]: E1203 11:11:12.321556 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" podUID="02ff397c-db50-4b3b-be2c-b43dcd1c71db" Dec 03 11:11:12 crc kubenswrapper[4756]: E1203 11:11:12.353328 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" podUID="52cfac1b-a56f-4189-b9f0-5c4a8acf2069" Dec 03 11:11:12 crc 
kubenswrapper[4756]: E1203 11:11:12.400750 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" podUID="6c2df820-90b9-48fe-8dd0-8731028d0dbd" Dec 03 11:11:12 crc kubenswrapper[4756]: E1203 11:11:12.438479 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" podUID="53d098b0-4833-4b74-b13d-57a4d9c5ee13" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.522719 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" event={"ID":"53d098b0-4833-4b74-b13d-57a4d9c5ee13","Type":"ContainerStarted","Data":"31de889415b5acb2cbffb162432604987730eb852307d09a113c379a77365c69"} Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.555155 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" event={"ID":"02ff397c-db50-4b3b-be2c-b43dcd1c71db","Type":"ContainerStarted","Data":"ec002c47621137738660d313f2c1cec2e7817972c7570e3d14b16886fbc72f96"} Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.571367 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" event={"ID":"77626dde-3586-4b33-b4a4-326bab5bfe19","Type":"ContainerStarted","Data":"e4352e5ca33bc4e32fd2720bd7457ed40d6d1c471f755634de6205f713d4a7e5"} Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.572754 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.576554 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.586764 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" event={"ID":"52cfac1b-a56f-4189-b9f0-5c4a8acf2069","Type":"ContainerStarted","Data":"93b3c595a238b33e9b9c75dd01b60c5cfc030bdef7568ecd244d7e730158b6dc"} Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.607132 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" event={"ID":"0df6dfa3-00de-4da1-a132-358b5f6a66e9","Type":"ContainerStarted","Data":"966684124ba5cbe6e8eabcb048be66812d4e79143e2e7972e5c2fcf3f65f0bad"} Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.650558 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" event={"ID":"0606b8cc-309a-4759-82d9-989ef224169b","Type":"ContainerStarted","Data":"8d995d5426ee9e96d8bc9a56ad50d5b811968811f9e1541627f3a84997db1bac"} Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.651862 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.655146 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-q5klh" podStartSLOduration=3.640479653 podStartE2EDuration="45.655108141s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:29.748564482 +0000 UTC m=+1040.778565726" lastFinishedPulling="2025-12-03 
11:11:11.76319297 +0000 UTC m=+1082.793194214" observedRunningTime="2025-12-03 11:11:12.651201978 +0000 UTC m=+1083.681203222" watchObservedRunningTime="2025-12-03 11:11:12.655108141 +0000 UTC m=+1083.685109385" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.657651 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.687073 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" event={"ID":"ed53f929-2cac-4053-8875-ad53414156c1","Type":"ContainerStarted","Data":"9ba2da4512a03db5e9509138264530ad66a6f15707fbc7ad6f965f64ded90e76"} Dec 03 11:11:12 crc kubenswrapper[4756]: E1203 11:11:12.856131 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" podUID="ef02c569-7cc5-43a3-a4e9-d8c97cc07465" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.870464 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" event={"ID":"02b366f7-138d-4a78-9772-8e22db219753","Type":"ContainerStarted","Data":"c75db4fb66bf5961500a8ad7d07383777881f2836285a1ed9130d827f9b08922"} Dec 03 11:11:12 crc kubenswrapper[4756]: E1203 11:11:12.894070 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" podUID="ded35123-da5e-4a26-9caf-61d6c9d920cd" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.899744 4756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-2kq6w" podStartSLOduration=3.536913835 podStartE2EDuration="44.899713296s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.421647432 +0000 UTC m=+1041.451648676" lastFinishedPulling="2025-12-03 11:11:11.784446893 +0000 UTC m=+1082.814448137" observedRunningTime="2025-12-03 11:11:12.898732535 +0000 UTC m=+1083.928733799" watchObservedRunningTime="2025-12-03 11:11:12.899713296 +0000 UTC m=+1083.929714540" Dec 03 11:11:12 crc kubenswrapper[4756]: I1203 11:11:12.929560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" event={"ID":"6c2df820-90b9-48fe-8dd0-8731028d0dbd","Type":"ContainerStarted","Data":"6b83bac4539d8c2be801fc6d6e8a6b798650ea76496f7e00887b62c9c69bd341"} Dec 03 11:11:13 crc kubenswrapper[4756]: E1203 11:11:13.298204 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" podUID="0b3df789-34ca-4ab2-8fb7-a8fee4df46a7" Dec 03 11:11:14 crc kubenswrapper[4756]: E1203 11:11:14.043715 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" podUID="f651f26f-55f4-47cb-a318-0e2a9512f194" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.150963 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" event={"ID":"d605c327-c0d8-4466-b135-a1c8c777b91c","Type":"ContainerStarted","Data":"4745d8446707d2d585e19d94196b1389b64df06bdc55e20fb5afeec477e3bb47"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.151805 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.155321 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.167700 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.178567 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-crd8q" podStartSLOduration=5.050297159 podStartE2EDuration="47.178541174s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:29.675409198 +0000 UTC m=+1040.705410432" lastFinishedPulling="2025-12-03 11:11:11.803653203 +0000 UTC m=+1082.833654447" observedRunningTime="2025-12-03 11:11:14.175441167 +0000 UTC m=+1085.205442421" watchObservedRunningTime="2025-12-03 11:11:14.178541174 +0000 UTC m=+1085.208542418" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.186502 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" event={"ID":"ef02c569-7cc5-43a3-a4e9-d8c97cc07465","Type":"ContainerStarted","Data":"d4f6e2269fe0cd61c86e0c631f4b96797396a1cce6b0984e08bd0fd4268616df"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.197492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" event={"ID":"786fcded-8103-4691-b8a4-fa6ef5b79ee6","Type":"ContainerStarted","Data":"cbb8a5b1d976743caa2967e364af860e719b150a8c56692b5e6a7f2dda9ea348"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.198432 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.204467 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.207491 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" podStartSLOduration=5.307595485 podStartE2EDuration="46.207464676s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.926768909 +0000 UTC m=+1041.956770153" lastFinishedPulling="2025-12-03 11:11:11.8266381 +0000 UTC m=+1082.856639344" observedRunningTime="2025-12-03 11:11:14.205216996 +0000 UTC m=+1085.235218260" watchObservedRunningTime="2025-12-03 11:11:14.207464676 +0000 UTC m=+1085.237465920" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.242659 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.247299 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" event={"ID":"0f3ba133-8575-4603-ad87-77502244b892","Type":"ContainerStarted","Data":"ee9b751b1eb5d7e8eb2cf0ddec192ec45cf8f5af2c98b91f99aa5e0559f264b7"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.247733 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.253597 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.261202 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.273499 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" event={"ID":"77043fde-1e4d-4590-b97c-3de89953581a","Type":"ContainerStarted","Data":"4139dcec56bf36ec469da27ef6fe75b8b5aee764c14ff3bd9e9ef6fc35e58e02"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.274101 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.279290 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.280349 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" event={"ID":"e8df208a-e66b-4532-bb65-8c673f2659bc","Type":"ContainerStarted","Data":"f4fef4ec728e7629e4deaab1617fcd4c006a642d725da530a199c91defc53c2c"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.280983 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.285153 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.286230 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" event={"ID":"0df6dfa3-00de-4da1-a132-358b5f6a66e9","Type":"ContainerStarted","Data":"1bce8e3853aa1b1339f18f591a20c90ab4567783a7a35cf091461b81da94dd78"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.286789 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.286892 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.302463 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" event={"ID":"22eded41-fc81-4a9c-b831-8cfb8d339258","Type":"ContainerStarted","Data":"f03d332527a84a65854e992122ca9068e17240733296ae3f9b235698c8b7698d"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.304200 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.306086 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.336423 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" event={"ID":"ded35123-da5e-4a26-9caf-61d6c9d920cd","Type":"ContainerStarted","Data":"6b9602e6fd3460d129d4366e107fc1ad6f9dc7de09ceefb826b3808cbe87a581"} Dec 03 11:11:14 crc 
kubenswrapper[4756]: I1203 11:11:14.357702 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hc7n7" podStartSLOduration=5.656433851 podStartE2EDuration="47.357671186s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.16048806 +0000 UTC m=+1041.190489314" lastFinishedPulling="2025-12-03 11:11:11.861725405 +0000 UTC m=+1082.891726649" observedRunningTime="2025-12-03 11:11:14.3443572 +0000 UTC m=+1085.374358464" watchObservedRunningTime="2025-12-03 11:11:14.357671186 +0000 UTC m=+1085.387672430" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.361944 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" event={"ID":"419295e1-b487-4701-9fc8-0273a49277dc","Type":"ContainerStarted","Data":"37fb17ca45e947bba2f63354747fbc83f75a58b99713922f5f827e351ca9766a"} Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.369120 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.369188 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.432348 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gqmt7" podStartSLOduration=5.818247701 podStartE2EDuration="47.432324316s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.198153145 +0000 UTC m=+1041.228154379" lastFinishedPulling="2025-12-03 11:11:11.81222975 +0000 UTC m=+1082.842230994" observedRunningTime="2025-12-03 11:11:14.43150707 +0000 UTC m=+1085.461508314" 
watchObservedRunningTime="2025-12-03 11:11:14.432324316 +0000 UTC m=+1085.462325560" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.515838 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" podStartSLOduration=5.646420522 podStartE2EDuration="46.515816922s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:31.013835797 +0000 UTC m=+1042.043837041" lastFinishedPulling="2025-12-03 11:11:11.883232197 +0000 UTC m=+1082.913233441" observedRunningTime="2025-12-03 11:11:14.512169508 +0000 UTC m=+1085.542170752" watchObservedRunningTime="2025-12-03 11:11:14.515816922 +0000 UTC m=+1085.545818166" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.603213 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-g6zjw" podStartSLOduration=5.759218684 podStartE2EDuration="46.603184399s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.941745657 +0000 UTC m=+1041.971746901" lastFinishedPulling="2025-12-03 11:11:11.785711362 +0000 UTC m=+1082.815712616" observedRunningTime="2025-12-03 11:11:14.583506965 +0000 UTC m=+1085.613508209" watchObservedRunningTime="2025-12-03 11:11:14.603184399 +0000 UTC m=+1085.633185643" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.848874 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-vqlmr" podStartSLOduration=5.616694809 podStartE2EDuration="47.848847757s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:29.556054823 +0000 UTC m=+1040.586056067" lastFinishedPulling="2025-12-03 11:11:11.788207771 +0000 UTC m=+1082.818209015" observedRunningTime="2025-12-03 11:11:14.838979949 +0000 UTC m=+1085.868981203" 
watchObservedRunningTime="2025-12-03 11:11:14.848847757 +0000 UTC m=+1085.878849001" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.852230 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-v4kqz" podStartSLOduration=6.082093132 podStartE2EDuration="46.852222263s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:31.015737586 +0000 UTC m=+1042.045738820" lastFinishedPulling="2025-12-03 11:11:11.785866707 +0000 UTC m=+1082.815867951" observedRunningTime="2025-12-03 11:11:14.765205756 +0000 UTC m=+1085.795207000" watchObservedRunningTime="2025-12-03 11:11:14.852222263 +0000 UTC m=+1085.882223527" Dec 03 11:11:14 crc kubenswrapper[4756]: I1203 11:11:14.882664 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-dk746" podStartSLOduration=6.473609898 podStartE2EDuration="47.882628812s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.4161377 +0000 UTC m=+1041.446138954" lastFinishedPulling="2025-12-03 11:11:11.825156624 +0000 UTC m=+1082.855157868" observedRunningTime="2025-12-03 11:11:14.880628159 +0000 UTC m=+1085.910629403" watchObservedRunningTime="2025-12-03 11:11:14.882628812 +0000 UTC m=+1085.912630056" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.091385 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" podStartSLOduration=39.669268915 podStartE2EDuration="48.082944865s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:11:03.20536274 +0000 UTC m=+1074.235363984" lastFinishedPulling="2025-12-03 11:11:11.61903869 +0000 UTC m=+1082.649039934" observedRunningTime="2025-12-03 11:11:15.079606681 +0000 UTC m=+1086.109607935" 
watchObservedRunningTime="2025-12-03 11:11:15.082944865 +0000 UTC m=+1086.112946119" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.151429 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6c5c989645-dsrph" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.392029 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" event={"ID":"02b366f7-138d-4a78-9772-8e22db219753","Type":"ContainerStarted","Data":"b5a69f5e3db6ca458472504b965828976c95c404fd1cad00033d3903eec8c4ff"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.393173 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.405423 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" event={"ID":"6c2df820-90b9-48fe-8dd0-8731028d0dbd","Type":"ContainerStarted","Data":"cc1713fb36cf04c318c25540e60fe86abecf4302699fda1f989eb74d44cb2a05"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.406151 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.409972 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-bbqwn" event={"ID":"ce02ca0e-abd9-4a57-a68d-7d35f304f8fa","Type":"ContainerStarted","Data":"a4bddc396449d76b5d63619f0ba6f8b07c2febd6de5fcde828589178eeb9a1e2"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.419296 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" 
event={"ID":"f651f26f-55f4-47cb-a318-0e2a9512f194","Type":"ContainerStarted","Data":"7fdc99a714fb9126343cf81e80516763c107c7251764c4eccf34899da5d9917d"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.426306 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" event={"ID":"ed53f929-2cac-4053-8875-ad53414156c1","Type":"ContainerStarted","Data":"5ec429813ab43b6e5abd8c728e1bbeefc8381349e7f62e043aab10b6263be696"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.430115 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.438413 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" podStartSLOduration=38.917904467 podStartE2EDuration="47.43838917s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:11:03.061311404 +0000 UTC m=+1074.091312648" lastFinishedPulling="2025-12-03 11:11:11.581796107 +0000 UTC m=+1082.611797351" observedRunningTime="2025-12-03 11:11:15.437880674 +0000 UTC m=+1086.467881928" watchObservedRunningTime="2025-12-03 11:11:15.43838917 +0000 UTC m=+1086.468390414" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.451708 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" event={"ID":"53d098b0-4833-4b74-b13d-57a4d9c5ee13","Type":"ContainerStarted","Data":"14a7739ed1a67ee92d5db102b8abff2f332bcfa94730a41de8a80a269658bf78"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.452696 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 
11:11:15.462452 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" event={"ID":"02ff397c-db50-4b3b-be2c-b43dcd1c71db","Type":"ContainerStarted","Data":"c29895ed4d625bb5ef3a2c29a64fb5031053f2aed3855972345371c29fc86963"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.462832 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.484227 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-srv8g" event={"ID":"a5dd2be8-1335-43f1-83af-2a0efabcce1e","Type":"ContainerStarted","Data":"77ce63812fe36d9b992508f6e9d551a5f8f426c7643ce9df2351e58d017915c9"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.488566 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" podStartSLOduration=4.58368453 podStartE2EDuration="47.488549706s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.870185673 +0000 UTC m=+1041.900186917" lastFinishedPulling="2025-12-03 11:11:13.775050849 +0000 UTC m=+1084.805052093" observedRunningTime="2025-12-03 11:11:15.486341017 +0000 UTC m=+1086.516342281" watchObservedRunningTime="2025-12-03 11:11:15.488549706 +0000 UTC m=+1086.518550940" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.510446 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" event={"ID":"52cfac1b-a56f-4189-b9f0-5c4a8acf2069","Type":"ContainerStarted","Data":"320e3f96bcb9dc924262d45cd67d0b21c73a1524f77d9e2597eb4112f5ff2b92"} Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.515375 4756 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" podStartSLOduration=4.937601562 podStartE2EDuration="48.515346622s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.200761418 +0000 UTC m=+1041.230762662" lastFinishedPulling="2025-12-03 11:11:13.778506478 +0000 UTC m=+1084.808507722" observedRunningTime="2025-12-03 11:11:15.514531417 +0000 UTC m=+1086.544532681" watchObservedRunningTime="2025-12-03 11:11:15.515346622 +0000 UTC m=+1086.545347866" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.562006 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" podStartSLOduration=4.811298451 podStartE2EDuration="48.561978018s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:29.556926391 +0000 UTC m=+1040.586927635" lastFinishedPulling="2025-12-03 11:11:13.307605948 +0000 UTC m=+1084.337607202" observedRunningTime="2025-12-03 11:11:15.55758887 +0000 UTC m=+1086.587590114" watchObservedRunningTime="2025-12-03 11:11:15.561978018 +0000 UTC m=+1086.591979262" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.597892 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" podStartSLOduration=5.29298349 podStartE2EDuration="47.597866358s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:31.003138193 +0000 UTC m=+1042.033139437" lastFinishedPulling="2025-12-03 11:11:13.308021061 +0000 UTC m=+1084.338022305" observedRunningTime="2025-12-03 11:11:15.59635884 +0000 UTC m=+1086.626360094" watchObservedRunningTime="2025-12-03 11:11:15.597866358 +0000 UTC m=+1086.627867602" Dec 03 11:11:15 crc kubenswrapper[4756]: I1203 11:11:15.635617 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" podStartSLOduration=4.860645135 podStartE2EDuration="47.635583665s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:31.004219057 +0000 UTC m=+1042.034220301" lastFinishedPulling="2025-12-03 11:11:13.779157597 +0000 UTC m=+1084.809158831" observedRunningTime="2025-12-03 11:11:15.628863756 +0000 UTC m=+1086.658865010" watchObservedRunningTime="2025-12-03 11:11:15.635583665 +0000 UTC m=+1086.665584909" Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.518345 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" event={"ID":"ded35123-da5e-4a26-9caf-61d6c9d920cd","Type":"ContainerStarted","Data":"857036f5070a4834043e93d2bf91a5ffdf818f57c8367510fc2a740f046bc4cb"} Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.518680 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.521556 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" event={"ID":"f651f26f-55f4-47cb-a318-0e2a9512f194","Type":"ContainerStarted","Data":"9952a53d468ee6a223a1f7e98c5dfedcf9a51d4ad7d0d0849d293b0b0562b332"} Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.521859 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.525467 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" 
event={"ID":"ef02c569-7cc5-43a3-a4e9-d8c97cc07465","Type":"ContainerStarted","Data":"903032214bddabf682c64a3ece82cd9ea36ff3486ddc8cfb34a4db79225bc652"} Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.525528 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.526879 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.577274 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" podStartSLOduration=4.207469727 podStartE2EDuration="48.577243659s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:30.816502678 +0000 UTC m=+1041.846503912" lastFinishedPulling="2025-12-03 11:11:15.1862766 +0000 UTC m=+1086.216277844" observedRunningTime="2025-12-03 11:11:16.575742762 +0000 UTC m=+1087.605744016" watchObservedRunningTime="2025-12-03 11:11:16.577243659 +0000 UTC m=+1087.607244903" Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.649576 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" podStartSLOduration=4.679310063 podStartE2EDuration="48.649539215s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:31.005526567 +0000 UTC m=+1042.035527801" lastFinishedPulling="2025-12-03 11:11:14.975755709 +0000 UTC m=+1086.005756953" observedRunningTime="2025-12-03 11:11:16.640095771 +0000 UTC m=+1087.670097015" watchObservedRunningTime="2025-12-03 11:11:16.649539215 +0000 UTC m=+1087.679540459" Dec 03 11:11:16 crc kubenswrapper[4756]: I1203 11:11:16.683764 4756 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" podStartSLOduration=4.504012858 podStartE2EDuration="49.683736643s" podCreationTimestamp="2025-12-03 11:10:27 +0000 UTC" firstStartedPulling="2025-12-03 11:10:29.74657208 +0000 UTC m=+1040.776573324" lastFinishedPulling="2025-12-03 11:11:14.926295865 +0000 UTC m=+1085.956297109" observedRunningTime="2025-12-03 11:11:16.680332186 +0000 UTC m=+1087.710333430" watchObservedRunningTime="2025-12-03 11:11:16.683736643 +0000 UTC m=+1087.713737887" Dec 03 11:11:19 crc kubenswrapper[4756]: I1203 11:11:19.147345 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-9t6v5" Dec 03 11:11:20 crc kubenswrapper[4756]: I1203 11:11:20.342335 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-hpd8p" Dec 03 11:11:24 crc kubenswrapper[4756]: I1203 11:11:24.929687 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4" Dec 03 11:11:28 crc kubenswrapper[4756]: I1203 11:11:28.129334 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-cg2g8" Dec 03 11:11:28 crc kubenswrapper[4756]: I1203 11:11:28.247377 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-62955" Dec 03 11:11:28 crc kubenswrapper[4756]: I1203 11:11:28.589079 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bgw6p" Dec 03 11:11:28 crc kubenswrapper[4756]: I1203 11:11:28.759529 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-xdlgn" Dec 03 11:11:28 crc kubenswrapper[4756]: I1203 11:11:28.845615 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-cxgxw" Dec 03 11:11:28 crc kubenswrapper[4756]: I1203 11:11:28.949160 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-wlfff" Dec 03 11:11:29 crc kubenswrapper[4756]: I1203 11:11:29.102098 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-9r6sc" Dec 03 11:11:38 crc kubenswrapper[4756]: I1203 11:11:38.805676 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" event={"ID":"0b3df789-34ca-4ab2-8fb7-a8fee4df46a7","Type":"ContainerStarted","Data":"e3de4463b7525b218398013bf54b37864384d0468da054c2ffdcac7c04078c88"} Dec 03 11:11:38 crc kubenswrapper[4756]: I1203 11:11:38.829736 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fkx8p" podStartSLOduration=3.63518236 podStartE2EDuration="1m10.829707944s" podCreationTimestamp="2025-12-03 11:10:28 +0000 UTC" firstStartedPulling="2025-12-03 11:10:31.082048156 +0000 UTC m=+1042.112049400" lastFinishedPulling="2025-12-03 11:11:38.27657374 +0000 UTC m=+1109.306574984" observedRunningTime="2025-12-03 11:11:38.825646958 +0000 UTC m=+1109.855648202" watchObservedRunningTime="2025-12-03 11:11:38.829707944 +0000 UTC m=+1109.859709188" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.607410 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.609961 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.632185 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8n5sh"] Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.633679 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.636787 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.637302 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.638106 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.642069 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7c4tn" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.648430 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8n5sh"] Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.689549 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b3fac1-8598-44ff-a5a3-326c326e7c48-config\") pod \"dnsmasq-dns-675f4bcbfc-8n5sh\" (UID: 
\"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.690179 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bghv\" (UniqueName: \"kubernetes.io/projected/86b3fac1-8598-44ff-a5a3-326c326e7c48-kube-api-access-4bghv\") pod \"dnsmasq-dns-675f4bcbfc-8n5sh\" (UID: \"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.704435 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gn9pq"] Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.706162 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.709803 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.721718 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gn9pq"] Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.791594 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bghv\" (UniqueName: \"kubernetes.io/projected/86b3fac1-8598-44ff-a5a3-326c326e7c48-kube-api-access-4bghv\") pod \"dnsmasq-dns-675f4bcbfc-8n5sh\" (UID: \"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.791664 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-config\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc 
kubenswrapper[4756]: I1203 11:11:52.791709 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.791742 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z47p\" (UniqueName: \"kubernetes.io/projected/2239bf3b-53a5-48da-ab28-fe323c8870cb-kube-api-access-7z47p\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.791769 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b3fac1-8598-44ff-a5a3-326c326e7c48-config\") pod \"dnsmasq-dns-675f4bcbfc-8n5sh\" (UID: \"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.792864 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b3fac1-8598-44ff-a5a3-326c326e7c48-config\") pod \"dnsmasq-dns-675f4bcbfc-8n5sh\" (UID: \"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.820016 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bghv\" (UniqueName: \"kubernetes.io/projected/86b3fac1-8598-44ff-a5a3-326c326e7c48-kube-api-access-4bghv\") pod \"dnsmasq-dns-675f4bcbfc-8n5sh\" (UID: \"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.893128 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-config\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.893747 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.893813 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z47p\" (UniqueName: \"kubernetes.io/projected/2239bf3b-53a5-48da-ab28-fe323c8870cb-kube-api-access-7z47p\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.894220 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-config\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.895081 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.916127 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z47p\" (UniqueName: 
\"kubernetes.io/projected/2239bf3b-53a5-48da-ab28-fe323c8870cb-kube-api-access-7z47p\") pod \"dnsmasq-dns-78dd6ddcc-gn9pq\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:52 crc kubenswrapper[4756]: I1203 11:11:52.961045 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:11:53 crc kubenswrapper[4756]: I1203 11:11:53.025509 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:11:53 crc kubenswrapper[4756]: I1203 11:11:53.298445 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8n5sh"] Dec 03 11:11:53 crc kubenswrapper[4756]: I1203 11:11:53.618283 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gn9pq"] Dec 03 11:11:53 crc kubenswrapper[4756]: I1203 11:11:53.939722 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" event={"ID":"86b3fac1-8598-44ff-a5a3-326c326e7c48","Type":"ContainerStarted","Data":"1fc9a6a3647ec236f55ad3e9d0317e214efd5e87a302a5d4b228e09dbcf769d3"} Dec 03 11:11:53 crc kubenswrapper[4756]: I1203 11:11:53.941335 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" event={"ID":"2239bf3b-53a5-48da-ab28-fe323c8870cb","Type":"ContainerStarted","Data":"807b66e142a8d65d052e1387c294cd7aa07c8f5de44b7cccfc83e6f20ba5d044"} Dec 03 11:11:55 crc kubenswrapper[4756]: I1203 11:11:55.817258 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8n5sh"] Dec 03 11:11:55 crc kubenswrapper[4756]: I1203 11:11:55.838846 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-b8kgq"] Dec 03 11:11:55 crc kubenswrapper[4756]: I1203 11:11:55.842779 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:55 crc kubenswrapper[4756]: I1203 11:11:55.877015 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-b8kgq"] Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.012756 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.013323 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g745s\" (UniqueName: \"kubernetes.io/projected/1b8749e2-448f-47ea-88a5-fea22b3edf54-kube-api-access-g745s\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.013390 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-config\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.115440 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.115522 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g745s\" (UniqueName: 
\"kubernetes.io/projected/1b8749e2-448f-47ea-88a5-fea22b3edf54-kube-api-access-g745s\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.115617 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-config\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.116756 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-config\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.116756 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.149123 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g745s\" (UniqueName: \"kubernetes.io/projected/1b8749e2-448f-47ea-88a5-fea22b3edf54-kube-api-access-g745s\") pod \"dnsmasq-dns-5ccc8479f9-b8kgq\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.194083 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gn9pq"] Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.194490 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.240128 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n57vc"] Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.243602 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.256935 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n57vc"] Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.423920 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph8j2\" (UniqueName: \"kubernetes.io/projected/44e97327-85df-405e-afa1-46b2105ccf65-kube-api-access-ph8j2\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.424029 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.424067 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-config\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.531425 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph8j2\" (UniqueName: 
\"kubernetes.io/projected/44e97327-85df-405e-afa1-46b2105ccf65-kube-api-access-ph8j2\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.531495 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.531527 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-config\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.532926 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-config\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.533883 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.554125 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph8j2\" (UniqueName: \"kubernetes.io/projected/44e97327-85df-405e-afa1-46b2105ccf65-kube-api-access-ph8j2\") pod \"dnsmasq-dns-57d769cc4f-n57vc\" 
(UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.569820 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.793525 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-b8kgq"] Dec 03 11:11:56 crc kubenswrapper[4756]: W1203 11:11:56.835266 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8749e2_448f_47ea_88a5_fea22b3edf54.slice/crio-3e1b52c4f8dd46a9b030d3148c2af79308b62c66ccf347f46cb575a6b0412ba6 WatchSource:0}: Error finding container 3e1b52c4f8dd46a9b030d3148c2af79308b62c66ccf347f46cb575a6b0412ba6: Status 404 returned error can't find the container with id 3e1b52c4f8dd46a9b030d3148c2af79308b62c66ccf347f46cb575a6b0412ba6 Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.965591 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.968041 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.979398 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.980544 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.984522 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.984572 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sm9m8" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.984744 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.984927 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 11:11:56 crc kubenswrapper[4756]: I1203 11:11:56.985884 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.004196 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" event={"ID":"1b8749e2-448f-47ea-88a5-fea22b3edf54","Type":"ContainerStarted","Data":"3e1b52c4f8dd46a9b030d3148c2af79308b62c66ccf347f46cb575a6b0412ba6"} Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.007793 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144045 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144135 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144296 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4570f01f-6639-41a5-9201-c49ed4fdefa8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144453 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4570f01f-6639-41a5-9201-c49ed4fdefa8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144700 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144731 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144828 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5fx\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-kube-api-access-pp5fx\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144885 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.144942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.145204 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.215650 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n57vc"] Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248078 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248112 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248141 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4570f01f-6639-41a5-9201-c49ed4fdefa8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248172 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248206 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4570f01f-6639-41a5-9201-c49ed4fdefa8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248232 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248250 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248274 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5fx\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-kube-api-access-pp5fx\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248297 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.248320 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.249531 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.250402 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.250796 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.256013 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.256264 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.257114 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.262032 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.270947 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4570f01f-6639-41a5-9201-c49ed4fdefa8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.272440 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4570f01f-6639-41a5-9201-c49ed4fdefa8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: 
I1203 11:11:57.281273 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5fx\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-kube-api-access-pp5fx\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.295452 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.313307 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.373079 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.375577 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.379830 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.379922 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.380039 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.380147 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.379942 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6gvvv" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.380321 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.380378 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.396744 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.555695 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.555755 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-config-data\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.555859 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.555910 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.555944 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.555984 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.556044 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.556078 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.556138 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.556251 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.556326 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tdwd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-kube-api-access-2tdwd\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.606727 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.658785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.658864 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.658907 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tdwd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-kube-api-access-2tdwd\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659048 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659081 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-config-data\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659118 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659198 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659220 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659252 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659277 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-erlang-cookie\") 
pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659449 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.659793 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.661536 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.661872 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-config-data\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.663145 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.666172 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.668999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.669378 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.669446 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.681619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.686267 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tdwd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-kube-api-access-2tdwd\") pod 
\"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.709059 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " pod="openstack/rabbitmq-server-0" Dec 03 11:11:57 crc kubenswrapper[4756]: I1203 11:11:57.778780 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.138421 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" event={"ID":"44e97327-85df-405e-afa1-46b2105ccf65","Type":"ContainerStarted","Data":"39338d555909977ffa20f3eae526859444a30233fb0270a2b0214ba6aaf84349"} Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.337910 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.340520 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.348563 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.348831 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.351670 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7chws" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.352097 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.359589 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.360347 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.420177 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:11:58 crc kubenswrapper[4756]: W1203 11:11:58.436446 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4570f01f_6639_41a5_9201_c49ed4fdefa8.slice/crio-3a2d9146e97a66b32be0afb4ba71ba2a821e2aec18f6a7f6c60433a540266b4f WatchSource:0}: Error finding container 3a2d9146e97a66b32be0afb4ba71ba2a821e2aec18f6a7f6c60433a540266b4f: Status 404 returned error can't find the container with id 3a2d9146e97a66b32be0afb4ba71ba2a821e2aec18f6a7f6c60433a540266b4f Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489187 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d6980bf2-fd5f-4cb1-b148-414229444006-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489254 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-kolla-config\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489290 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489330 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qrs\" (UniqueName: \"kubernetes.io/projected/d6980bf2-fd5f-4cb1-b148-414229444006-kube-api-access-f4qrs\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489360 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489421 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-config-data-default\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489447 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6980bf2-fd5f-4cb1-b148-414229444006-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.489533 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6980bf2-fd5f-4cb1-b148-414229444006-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.599928 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-config-data-default\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.599984 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6980bf2-fd5f-4cb1-b148-414229444006-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.600026 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/d6980bf2-fd5f-4cb1-b148-414229444006-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.600122 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6980bf2-fd5f-4cb1-b148-414229444006-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.600158 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-kolla-config\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.600192 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.600232 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qrs\" (UniqueName: \"kubernetes.io/projected/d6980bf2-fd5f-4cb1-b148-414229444006-kube-api-access-f4qrs\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.600260 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") 
" pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.603515 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.604100 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-kolla-config\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.604401 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.606606 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6980bf2-fd5f-4cb1-b148-414229444006-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.610986 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6980bf2-fd5f-4cb1-b148-414229444006-config-data-default\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.623789 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6980bf2-fd5f-4cb1-b148-414229444006-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.624259 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6980bf2-fd5f-4cb1-b148-414229444006-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.653636 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qrs\" (UniqueName: \"kubernetes.io/projected/d6980bf2-fd5f-4cb1-b148-414229444006-kube-api-access-f4qrs\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.669918 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d6980bf2-fd5f-4cb1-b148-414229444006\") " pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.679423 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 11:11:58 crc kubenswrapper[4756]: I1203 11:11:58.711779 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.183496 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8770c2d-c514-44e9-99d6-c8713f7f9ab1","Type":"ContainerStarted","Data":"ce20b540430634554e7fc9e5376c657949d69fbe4a6473d12267c8fd1fdc3570"} Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.187751 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4570f01f-6639-41a5-9201-c49ed4fdefa8","Type":"ContainerStarted","Data":"3a2d9146e97a66b32be0afb4ba71ba2a821e2aec18f6a7f6c60433a540266b4f"} Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.388349 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.902907 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.906900 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.910864 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.911530 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-69k89" Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.911639 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.911651 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 03 11:11:59 crc kubenswrapper[4756]: I1203 11:11:59.916278 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.158026 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8024d-1033-41f9-a53f-6c5119388b40-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.158086 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.158148 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.158182 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.158225 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25a8024d-1033-41f9-a53f-6c5119388b40-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.158256 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.158284 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25a8024d-1033-41f9-a53f-6c5119388b40-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.174596 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2qj\" 
(UniqueName: \"kubernetes.io/projected/25a8024d-1033-41f9-a53f-6c5119388b40-kube-api-access-7s2qj\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.251234 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6980bf2-fd5f-4cb1-b148-414229444006","Type":"ContainerStarted","Data":"a47c9fe62d70eaf579864e4af73fb94774697c9c187155b165a440be77f9fa5e"} Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.259141 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.260439 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.262965 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-x5b25" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.263126 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.264700 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277516 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-config-data\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277582 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25a8024d-1033-41f9-a53f-6c5119388b40-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277622 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2qj\" (UniqueName: \"kubernetes.io/projected/25a8024d-1033-41f9-a53f-6c5119388b40-kube-api-access-7s2qj\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277647 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8024d-1033-41f9-a53f-6c5119388b40-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277683 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277743 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-kolla-config\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " 
pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277763 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277788 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277807 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277834 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjth\" (UniqueName: \"kubernetes.io/projected/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-kube-api-access-wrjth\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.277867 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25a8024d-1033-41f9-a53f-6c5119388b40-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: 
I1203 11:12:00.277913 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.282390 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25a8024d-1033-41f9-a53f-6c5119388b40-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.283005 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.287164 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.293874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.297878 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25a8024d-1033-41f9-a53f-6c5119388b40-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.331677 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25a8024d-1033-41f9-a53f-6c5119388b40-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.337973 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8024d-1033-41f9-a53f-6c5119388b40-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.364758 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.380978 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.381049 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-kolla-config\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.381071 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.381109 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjth\" (UniqueName: \"kubernetes.io/projected/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-kube-api-access-wrjth\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.382267 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-kolla-config\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.392819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-config-data\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.390849 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.391370 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2qj\" (UniqueName: \"kubernetes.io/projected/25a8024d-1033-41f9-a53f-6c5119388b40-kube-api-access-7s2qj\") pod \"openstack-cell1-galera-0\" (UID: \"25a8024d-1033-41f9-a53f-6c5119388b40\") " 
pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.393901 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.394057 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-config-data\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.394256 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.531549 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.553079 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjth\" (UniqueName: \"kubernetes.io/projected/81dedd61-3ae8-42b1-8af2-20fe40b22eb7-kube-api-access-wrjth\") pod \"memcached-0\" (UID: \"81dedd61-3ae8-42b1-8af2-20fe40b22eb7\") " pod="openstack/memcached-0" Dec 03 11:12:00 crc kubenswrapper[4756]: I1203 11:12:00.744344 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.173171 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.175005 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.187158 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-n2jf8" Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.191832 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fvdt\" (UniqueName: \"kubernetes.io/projected/9772219e-2495-4892-8977-52360ac83b0a-kube-api-access-6fvdt\") pod \"kube-state-metrics-0\" (UID: \"9772219e-2495-4892-8977-52360ac83b0a\") " pod="openstack/kube-state-metrics-0" Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.203780 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.301396 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fvdt\" (UniqueName: \"kubernetes.io/projected/9772219e-2495-4892-8977-52360ac83b0a-kube-api-access-6fvdt\") pod \"kube-state-metrics-0\" (UID: \"9772219e-2495-4892-8977-52360ac83b0a\") " pod="openstack/kube-state-metrics-0" Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.341614 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fvdt\" (UniqueName: \"kubernetes.io/projected/9772219e-2495-4892-8977-52360ac83b0a-kube-api-access-6fvdt\") pod \"kube-state-metrics-0\" (UID: \"9772219e-2495-4892-8977-52360ac83b0a\") " pod="openstack/kube-state-metrics-0" Dec 03 11:12:02 crc kubenswrapper[4756]: I1203 11:12:02.535669 4756 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.460277 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xlz9h"] Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.464968 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.470305 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xlz9h"] Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.470870 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.470921 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dvmqd" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.478794 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.561915 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e033887b-a32e-4141-9812-455b70f85d39-combined-ca-bundle\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.562005 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-run\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.562591 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-run-ovn\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.562621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e033887b-a32e-4141-9812-455b70f85d39-ovn-controller-tls-certs\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.562668 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs792\" (UniqueName: \"kubernetes.io/projected/e033887b-a32e-4141-9812-455b70f85d39-kube-api-access-zs792\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.562710 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e033887b-a32e-4141-9812-455b70f85d39-scripts\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.562738 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-log-ovn\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.653375 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-269cd"] Dec 03 11:12:05 crc 
kubenswrapper[4756]: I1203 11:12:05.657176 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.664380 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e033887b-a32e-4141-9812-455b70f85d39-combined-ca-bundle\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.664456 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-run\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.664517 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-run-ovn\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.664545 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e033887b-a32e-4141-9812-455b70f85d39-ovn-controller-tls-certs\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.664582 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs792\" (UniqueName: \"kubernetes.io/projected/e033887b-a32e-4141-9812-455b70f85d39-kube-api-access-zs792\") pod \"ovn-controller-xlz9h\" (UID: 
\"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.664609 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e033887b-a32e-4141-9812-455b70f85d39-scripts\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.664642 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-log-ovn\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.665507 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-log-ovn\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.667307 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-269cd"] Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.667874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-run-ovn\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.667977 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e033887b-a32e-4141-9812-455b70f85d39-var-run\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " 
pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.670011 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e033887b-a32e-4141-9812-455b70f85d39-scripts\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.682736 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e033887b-a32e-4141-9812-455b70f85d39-ovn-controller-tls-certs\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.692043 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs792\" (UniqueName: \"kubernetes.io/projected/e033887b-a32e-4141-9812-455b70f85d39-kube-api-access-zs792\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.699695 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e033887b-a32e-4141-9812-455b70f85d39-combined-ca-bundle\") pod \"ovn-controller-xlz9h\" (UID: \"e033887b-a32e-4141-9812-455b70f85d39\") " pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.835144 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.936632 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a408782-00c0-46f6-8559-023f8753699e-scripts\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.936837 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-run\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.936875 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-etc-ovs\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.936899 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-lib\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.936979 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn5fl\" (UniqueName: \"kubernetes.io/projected/1a408782-00c0-46f6-8559-023f8753699e-kube-api-access-hn5fl\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " 
pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:05 crc kubenswrapper[4756]: I1203 11:12:05.937063 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-log\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.072093 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-log\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.072172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a408782-00c0-46f6-8559-023f8753699e-scripts\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.072215 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-run\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.072243 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-etc-ovs\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.072265 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-lib\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.072295 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn5fl\" (UniqueName: \"kubernetes.io/projected/1a408782-00c0-46f6-8559-023f8753699e-kube-api-access-hn5fl\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.073177 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-log\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.074615 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-etc-ovs\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.074711 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-run\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.074903 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a408782-00c0-46f6-8559-023f8753699e-var-lib\") pod 
\"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.075349 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a408782-00c0-46f6-8559-023f8753699e-scripts\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.113016 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn5fl\" (UniqueName: \"kubernetes.io/projected/1a408782-00c0-46f6-8559-023f8753699e-kube-api-access-hn5fl\") pod \"ovn-controller-ovs-269cd\" (UID: \"1a408782-00c0-46f6-8559-023f8753699e\") " pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:06 crc kubenswrapper[4756]: I1203 11:12:06.356352 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.714375 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.716380 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.720677 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.720688 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.720687 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.721293 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-l294c" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.721453 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.730657 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824099 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824152 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824253 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e429c7e2-748f-4231-902c-00290ebe9eb9-config\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824410 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e429c7e2-748f-4231-902c-00290ebe9eb9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e429c7e2-748f-4231-902c-00290ebe9eb9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824573 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7r8v\" (UniqueName: \"kubernetes.io/projected/e429c7e2-748f-4231-902c-00290ebe9eb9-kube-api-access-p7r8v\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824667 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.824739 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.926857 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e429c7e2-748f-4231-902c-00290ebe9eb9-config\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.927003 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e429c7e2-748f-4231-902c-00290ebe9eb9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.927064 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e429c7e2-748f-4231-902c-00290ebe9eb9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.927122 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7r8v\" (UniqueName: \"kubernetes.io/projected/e429c7e2-748f-4231-902c-00290ebe9eb9-kube-api-access-p7r8v\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.927181 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " 
pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.927238 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.927292 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.927333 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.929666 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e429c7e2-748f-4231-902c-00290ebe9eb9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.929861 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e429c7e2-748f-4231-902c-00290ebe9eb9-config\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.929948 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.933565 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e429c7e2-748f-4231-902c-00290ebe9eb9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.939636 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.939649 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.946640 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e429c7e2-748f-4231-902c-00290ebe9eb9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.948377 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7r8v\" (UniqueName: \"kubernetes.io/projected/e429c7e2-748f-4231-902c-00290ebe9eb9-kube-api-access-p7r8v\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " 
pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:07 crc kubenswrapper[4756]: I1203 11:12:07.982295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e429c7e2-748f-4231-902c-00290ebe9eb9\") " pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:08 crc kubenswrapper[4756]: I1203 11:12:08.048222 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.800263 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.802993 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.806667 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zjmdz" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.808769 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.809379 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.809881 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.824140 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.970024 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.970112 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77eb8ce5-5779-43bf-a57b-7ace73542f58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.970181 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.970290 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77eb8ce5-5779-43bf-a57b-7ace73542f58-config\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.970726 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.971092 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" 
(UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.971283 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5mqs\" (UniqueName: \"kubernetes.io/projected/77eb8ce5-5779-43bf-a57b-7ace73542f58-kube-api-access-n5mqs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:09 crc kubenswrapper[4756]: I1203 11:12:09.971416 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77eb8ce5-5779-43bf-a57b-7ace73542f58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.075119 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77eb8ce5-5779-43bf-a57b-7ace73542f58-config\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.075740 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.076030 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc 
kubenswrapper[4756]: I1203 11:12:10.076207 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5mqs\" (UniqueName: \"kubernetes.io/projected/77eb8ce5-5779-43bf-a57b-7ace73542f58-kube-api-access-n5mqs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.076462 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77eb8ce5-5779-43bf-a57b-7ace73542f58-config\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.076717 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77eb8ce5-5779-43bf-a57b-7ace73542f58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.076929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.077711 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77eb8ce5-5779-43bf-a57b-7ace73542f58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.078409 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.077185 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/77eb8ce5-5779-43bf-a57b-7ace73542f58-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.078932 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77eb8ce5-5779-43bf-a57b-7ace73542f58-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.079709 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.082112 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.083634 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc 
kubenswrapper[4756]: I1203 11:12:10.084511 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77eb8ce5-5779-43bf-a57b-7ace73542f58-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.095859 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5mqs\" (UniqueName: \"kubernetes.io/projected/77eb8ce5-5779-43bf-a57b-7ace73542f58-kube-api-access-n5mqs\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.114485 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"77eb8ce5-5779-43bf-a57b-7ace73542f58\") " pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:10 crc kubenswrapper[4756]: I1203 11:12:10.133591 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:22 crc kubenswrapper[4756]: I1203 11:12:22.607540 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:12:22 crc kubenswrapper[4756]: I1203 11:12:22.608548 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:12:25 crc kubenswrapper[4756]: E1203 11:12:25.196178 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 11:12:25 crc kubenswrapper[4756]: E1203 11:12:25.196748 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph8j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-n57vc_openstack(44e97327-85df-405e-afa1-46b2105ccf65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:12:25 crc kubenswrapper[4756]: E1203 11:12:25.198088 4756 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" podUID="44e97327-85df-405e-afa1-46b2105ccf65" Dec 03 11:12:25 crc kubenswrapper[4756]: E1203 11:12:25.223266 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 11:12:25 crc kubenswrapper[4756]: E1203 11:12:25.223485 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8n5sh_openstack(86b3fac1-8598-44ff-a5a3-326c326e7c48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:12:25 crc kubenswrapper[4756]: E1203 11:12:25.224860 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" podUID="86b3fac1-8598-44ff-a5a3-326c326e7c48" Dec 03 11:12:25 crc kubenswrapper[4756]: E1203 11:12:25.611738 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" podUID="44e97327-85df-405e-afa1-46b2105ccf65" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.366975 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.367812 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pp5fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(4570f01f-6639-41a5-9201-c49ed4fdefa8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:12:26 crc 
kubenswrapper[4756]: E1203 11:12:26.369094 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.424944 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.425176 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7z47p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-gn9pq_openstack(2239bf3b-53a5-48da-ab28-fe323c8870cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.426467 4756 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" podUID="2239bf3b-53a5-48da-ab28-fe323c8870cb" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.429394 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.429580 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g745s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-b8kgq_openstack(1b8749e2-448f-47ea-88a5-fea22b3edf54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.431120 4756 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.439423 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.439628 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2tdwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b8770c2d-c514-44e9-99d6-c8713f7f9ab1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:12:26 crc 
kubenswrapper[4756]: E1203 11:12:26.440859 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.473876 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.495933 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b3fac1-8598-44ff-a5a3-326c326e7c48-config\") pod \"86b3fac1-8598-44ff-a5a3-326c326e7c48\" (UID: \"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.496211 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bghv\" (UniqueName: \"kubernetes.io/projected/86b3fac1-8598-44ff-a5a3-326c326e7c48-kube-api-access-4bghv\") pod \"86b3fac1-8598-44ff-a5a3-326c326e7c48\" (UID: \"86b3fac1-8598-44ff-a5a3-326c326e7c48\") " Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.497710 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86b3fac1-8598-44ff-a5a3-326c326e7c48-config" (OuterVolumeSpecName: "config") pod "86b3fac1-8598-44ff-a5a3-326c326e7c48" (UID: "86b3fac1-8598-44ff-a5a3-326c326e7c48"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.503723 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b3fac1-8598-44ff-a5a3-326c326e7c48-kube-api-access-4bghv" (OuterVolumeSpecName: "kube-api-access-4bghv") pod "86b3fac1-8598-44ff-a5a3-326c326e7c48" (UID: "86b3fac1-8598-44ff-a5a3-326c326e7c48"). InnerVolumeSpecName "kube-api-access-4bghv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.770529 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bghv\" (UniqueName: \"kubernetes.io/projected/86b3fac1-8598-44ff-a5a3-326c326e7c48-kube-api-access-4bghv\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.770586 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b3fac1-8598-44ff-a5a3-326c326e7c48-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.786618 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.787825 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8n5sh" event={"ID":"86b3fac1-8598-44ff-a5a3-326c326e7c48","Type":"ContainerDied","Data":"1fc9a6a3647ec236f55ad3e9d0317e214efd5e87a302a5d4b228e09dbcf769d3"} Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.788098 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.788655 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" Dec 03 11:12:26 crc kubenswrapper[4756]: E1203 11:12:26.788729 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.963563 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8n5sh"] Dec 03 11:12:26 crc kubenswrapper[4756]: I1203 11:12:26.977938 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8n5sh"] Dec 03 11:12:27 crc kubenswrapper[4756]: I1203 11:12:27.246604 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="86b3fac1-8598-44ff-a5a3-326c326e7c48" path="/var/lib/kubelet/pods/86b3fac1-8598-44ff-a5a3-326c326e7c48/volumes" Dec 03 11:12:33 crc kubenswrapper[4756]: E1203 11:12:33.900930 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 03 11:12:33 crc kubenswrapper[4756]: E1203 11:12:33.902197 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4qrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil
,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(d6980bf2-fd5f-4cb1-b148-414229444006): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:12:33 crc kubenswrapper[4756]: E1203 11:12:33.905263 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="d6980bf2-fd5f-4cb1-b148-414229444006" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.058454 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.283918 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z47p\" (UniqueName: \"kubernetes.io/projected/2239bf3b-53a5-48da-ab28-fe323c8870cb-kube-api-access-7z47p\") pod \"2239bf3b-53a5-48da-ab28-fe323c8870cb\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.284088 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-config\") pod \"2239bf3b-53a5-48da-ab28-fe323c8870cb\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.284341 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-dns-svc\") pod \"2239bf3b-53a5-48da-ab28-fe323c8870cb\" (UID: \"2239bf3b-53a5-48da-ab28-fe323c8870cb\") " Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.285048 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-config" (OuterVolumeSpecName: "config") pod "2239bf3b-53a5-48da-ab28-fe323c8870cb" (UID: "2239bf3b-53a5-48da-ab28-fe323c8870cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.285235 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2239bf3b-53a5-48da-ab28-fe323c8870cb" (UID: "2239bf3b-53a5-48da-ab28-fe323c8870cb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.305335 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2239bf3b-53a5-48da-ab28-fe323c8870cb-kube-api-access-7z47p" (OuterVolumeSpecName: "kube-api-access-7z47p") pod "2239bf3b-53a5-48da-ab28-fe323c8870cb" (UID: "2239bf3b-53a5-48da-ab28-fe323c8870cb"). InnerVolumeSpecName "kube-api-access-7z47p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.387769 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z47p\" (UniqueName: \"kubernetes.io/projected/2239bf3b-53a5-48da-ab28-fe323c8870cb-kube-api-access-7z47p\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.388624 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.388654 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2239bf3b-53a5-48da-ab28-fe323c8870cb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.774114 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.779416 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xlz9h"] Dec 03 11:12:34 crc kubenswrapper[4756]: W1203 11:12:34.794894 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode033887b_a32e_4141_9812_455b70f85d39.slice/crio-0da12aa4d7ab58111709b2e4b423c62c380bae5a8f616d423464a3c0f0878fe6 WatchSource:0}: Error finding container 
0da12aa4d7ab58111709b2e4b423c62c380bae5a8f616d423464a3c0f0878fe6: Status 404 returned error can't find the container with id 0da12aa4d7ab58111709b2e4b423c62c380bae5a8f616d423464a3c0f0878fe6 Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.861843 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.884227 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" event={"ID":"2239bf3b-53a5-48da-ab28-fe323c8870cb","Type":"ContainerDied","Data":"807b66e142a8d65d052e1387c294cd7aa07c8f5de44b7cccfc83e6f20ba5d044"} Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.884342 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gn9pq" Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.894164 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.895926 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25a8024d-1033-41f9-a53f-6c5119388b40","Type":"ContainerStarted","Data":"7c687bd5e5d0ed8bd32faf9cfd0b196c8560f65f01057cd62eb1d50854c7477f"} Dec 03 11:12:34 crc kubenswrapper[4756]: I1203 11:12:34.897586 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xlz9h" event={"ID":"e033887b-a32e-4141-9812-455b70f85d39","Type":"ContainerStarted","Data":"0da12aa4d7ab58111709b2e4b423c62c380bae5a8f616d423464a3c0f0878fe6"} Dec 03 11:12:34 crc kubenswrapper[4756]: E1203 11:12:34.907455 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="d6980bf2-fd5f-4cb1-b148-414229444006" 
Dec 03 11:12:34 crc kubenswrapper[4756]: W1203 11:12:34.917912 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81dedd61_3ae8_42b1_8af2_20fe40b22eb7.slice/crio-1c8638f2f847300df610d87031f8e189e8427b6386c6f297f4c5aab48844ef85 WatchSource:0}: Error finding container 1c8638f2f847300df610d87031f8e189e8427b6386c6f297f4c5aab48844ef85: Status 404 returned error can't find the container with id 1c8638f2f847300df610d87031f8e189e8427b6386c6f297f4c5aab48844ef85 Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.053133 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gn9pq"] Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.062211 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gn9pq"] Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.067787 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.250900 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2239bf3b-53a5-48da-ab28-fe323c8870cb" path="/var/lib/kubelet/pods/2239bf3b-53a5-48da-ab28-fe323c8870cb/volumes" Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.664137 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.757915 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-269cd"] Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.907889 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25a8024d-1033-41f9-a53f-6c5119388b40","Type":"ContainerStarted","Data":"e6c40e181353d536ca81f26c2db0423f6ae32fe61fd1f3b9c1e30fc93aebc413"} Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.913889 4756 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e429c7e2-748f-4231-902c-00290ebe9eb9","Type":"ContainerStarted","Data":"e7ce7f92622f2de304973aed54c032f43c3b27abb45b0a2ea00f4ba6d40aa3a8"} Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.916980 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9772219e-2495-4892-8977-52360ac83b0a","Type":"ContainerStarted","Data":"bf8843eedbe597be2b735833c257f2c2ffc41e8b301ebfd5de12e9eca2712087"} Dec 03 11:12:35 crc kubenswrapper[4756]: I1203 11:12:35.929160 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"81dedd61-3ae8-42b1-8af2-20fe40b22eb7","Type":"ContainerStarted","Data":"1c8638f2f847300df610d87031f8e189e8427b6386c6f297f4c5aab48844ef85"} Dec 03 11:12:36 crc kubenswrapper[4756]: I1203 11:12:36.941575 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"77eb8ce5-5779-43bf-a57b-7ace73542f58","Type":"ContainerStarted","Data":"b2275e8062540cb9f4928b8ce035f5b374d239cf4c5e8b621b41f832417e4198"} Dec 03 11:12:36 crc kubenswrapper[4756]: I1203 11:12:36.943550 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-269cd" event={"ID":"1a408782-00c0-46f6-8559-023f8753699e","Type":"ContainerStarted","Data":"ed487c4d995978fcc220055c58b40d4660897bda27efaba4b06e869846b10057"} Dec 03 11:12:38 crc kubenswrapper[4756]: I1203 11:12:38.962064 4756 generic.go:334] "Generic (PLEG): container finished" podID="25a8024d-1033-41f9-a53f-6c5119388b40" containerID="e6c40e181353d536ca81f26c2db0423f6ae32fe61fd1f3b9c1e30fc93aebc413" exitCode=0 Dec 03 11:12:38 crc kubenswrapper[4756]: I1203 11:12:38.962161 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25a8024d-1033-41f9-a53f-6c5119388b40","Type":"ContainerDied","Data":"e6c40e181353d536ca81f26c2db0423f6ae32fe61fd1f3b9c1e30fc93aebc413"} Dec 03 11:12:40 
crc kubenswrapper[4756]: I1203 11:12:40.982237 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"77eb8ce5-5779-43bf-a57b-7ace73542f58","Type":"ContainerStarted","Data":"21daeb257418ecfcbcc8b670015bc8ad58e72ad0ca8046f4fec341f74dab8fe8"} Dec 03 11:12:40 crc kubenswrapper[4756]: I1203 11:12:40.984763 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-269cd" event={"ID":"1a408782-00c0-46f6-8559-023f8753699e","Type":"ContainerStarted","Data":"21f4bec2b9de8c3aba067cf085c174c094177a9879fbd0870a2eca0d88e0d095"} Dec 03 11:12:40 crc kubenswrapper[4756]: I1203 11:12:40.987327 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"81dedd61-3ae8-42b1-8af2-20fe40b22eb7","Type":"ContainerStarted","Data":"6b6e0eb28932dbf2e956b8e87161ff1f6a6239b7b8f364dee6fbd1fb82c64419"} Dec 03 11:12:40 crc kubenswrapper[4756]: I1203 11:12:40.987565 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 03 11:12:40 crc kubenswrapper[4756]: I1203 11:12:40.989597 4756 generic.go:334] "Generic (PLEG): container finished" podID="44e97327-85df-405e-afa1-46b2105ccf65" containerID="f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95" exitCode=0 Dec 03 11:12:40 crc kubenswrapper[4756]: I1203 11:12:40.989728 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" event={"ID":"44e97327-85df-405e-afa1-46b2105ccf65","Type":"ContainerDied","Data":"f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95"} Dec 03 11:12:40 crc kubenswrapper[4756]: I1203 11:12:40.994995 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9772219e-2495-4892-8977-52360ac83b0a","Type":"ContainerStarted","Data":"bd3c1ac04dd6624fb6c52536a17f267f8249f437eca0c3b30306ce66a708cc99"} Dec 03 11:12:40 crc kubenswrapper[4756]: I1203 11:12:40.995524 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.000398 4756 generic.go:334] "Generic (PLEG): container finished" podID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerID="9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725" exitCode=0 Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.000447 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" event={"ID":"1b8749e2-448f-47ea-88a5-fea22b3edf54","Type":"ContainerDied","Data":"9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725"} Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.012589 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25a8024d-1033-41f9-a53f-6c5119388b40","Type":"ContainerStarted","Data":"95fdb397bb10a3b69cd34bcd17a3e05d410321c493a079ff0782f8ea50ec5334"} Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.021879 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e429c7e2-748f-4231-902c-00290ebe9eb9","Type":"ContainerStarted","Data":"ce062ae4b044bab1e98c73c144817b317bb5ea4dd32bd73fe6b0e455bcc9957d"} Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.029132 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xlz9h" event={"ID":"e033887b-a32e-4141-9812-455b70f85d39","Type":"ContainerStarted","Data":"559cf5b45faf06a78ad002804c82e60369709ec0e2fdd440d503c105bc89691b"} Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.029296 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xlz9h" Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.045613 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=36.338267431 podStartE2EDuration="41.04558488s" 
podCreationTimestamp="2025-12-03 11:12:00 +0000 UTC" firstStartedPulling="2025-12-03 11:12:34.931052339 +0000 UTC m=+1165.961053583" lastFinishedPulling="2025-12-03 11:12:39.638369788 +0000 UTC m=+1170.668371032" observedRunningTime="2025-12-03 11:12:41.035348281 +0000 UTC m=+1172.065349525" watchObservedRunningTime="2025-12-03 11:12:41.04558488 +0000 UTC m=+1172.075586124" Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.094337 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=42.545901962 podStartE2EDuration="43.09431019s" podCreationTimestamp="2025-12-03 11:11:58 +0000 UTC" firstStartedPulling="2025-12-03 11:12:34.769700759 +0000 UTC m=+1165.799701993" lastFinishedPulling="2025-12-03 11:12:35.318108977 +0000 UTC m=+1166.348110221" observedRunningTime="2025-12-03 11:12:41.084481463 +0000 UTC m=+1172.114482717" watchObservedRunningTime="2025-12-03 11:12:41.09431019 +0000 UTC m=+1172.124311434" Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.104749 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=34.037301538 podStartE2EDuration="39.104715854s" podCreationTimestamp="2025-12-03 11:12:02 +0000 UTC" firstStartedPulling="2025-12-03 11:12:34.934001511 +0000 UTC m=+1165.964002755" lastFinishedPulling="2025-12-03 11:12:40.001415827 +0000 UTC m=+1171.031417071" observedRunningTime="2025-12-03 11:12:41.103850647 +0000 UTC m=+1172.133851911" watchObservedRunningTime="2025-12-03 11:12:41.104715854 +0000 UTC m=+1172.134717108" Dec 03 11:12:41 crc kubenswrapper[4756]: I1203 11:12:41.151987 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xlz9h" podStartSLOduration=30.958900614 podStartE2EDuration="36.151939516s" podCreationTimestamp="2025-12-03 11:12:05 +0000 UTC" firstStartedPulling="2025-12-03 11:12:34.808097156 +0000 UTC m=+1165.838098400" 
lastFinishedPulling="2025-12-03 11:12:40.001136058 +0000 UTC m=+1171.031137302" observedRunningTime="2025-12-03 11:12:41.149394157 +0000 UTC m=+1172.179395401" watchObservedRunningTime="2025-12-03 11:12:41.151939516 +0000 UTC m=+1172.181940760" Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.043713 4756 generic.go:334] "Generic (PLEG): container finished" podID="1a408782-00c0-46f6-8559-023f8753699e" containerID="21f4bec2b9de8c3aba067cf085c174c094177a9879fbd0870a2eca0d88e0d095" exitCode=0 Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.043865 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-269cd" event={"ID":"1a408782-00c0-46f6-8559-023f8753699e","Type":"ContainerDied","Data":"21f4bec2b9de8c3aba067cf085c174c094177a9879fbd0870a2eca0d88e0d095"} Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.048285 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8770c2d-c514-44e9-99d6-c8713f7f9ab1","Type":"ContainerStarted","Data":"94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49"} Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.060008 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" event={"ID":"44e97327-85df-405e-afa1-46b2105ccf65","Type":"ContainerStarted","Data":"e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073"} Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.060729 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.064832 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" event={"ID":"1b8749e2-448f-47ea-88a5-fea22b3edf54","Type":"ContainerStarted","Data":"96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac"} Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.125143 4756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" podStartSLOduration=3.309649072 podStartE2EDuration="46.125111536s" podCreationTimestamp="2025-12-03 11:11:56 +0000 UTC" firstStartedPulling="2025-12-03 11:11:57.244449506 +0000 UTC m=+1128.274450750" lastFinishedPulling="2025-12-03 11:12:40.05991197 +0000 UTC m=+1171.089913214" observedRunningTime="2025-12-03 11:12:42.125040365 +0000 UTC m=+1173.155041619" watchObservedRunningTime="2025-12-03 11:12:42.125111536 +0000 UTC m=+1173.155112780" Dec 03 11:12:42 crc kubenswrapper[4756]: I1203 11:12:42.148197 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" podStartSLOduration=3.95069468 podStartE2EDuration="47.148174145s" podCreationTimestamp="2025-12-03 11:11:55 +0000 UTC" firstStartedPulling="2025-12-03 11:11:56.840727909 +0000 UTC m=+1127.870729153" lastFinishedPulling="2025-12-03 11:12:40.038207374 +0000 UTC m=+1171.068208618" observedRunningTime="2025-12-03 11:12:42.144289255 +0000 UTC m=+1173.174290499" watchObservedRunningTime="2025-12-03 11:12:42.148174145 +0000 UTC m=+1173.178175389" Dec 03 11:12:43 crc kubenswrapper[4756]: I1203 11:12:43.082085 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4570f01f-6639-41a5-9201-c49ed4fdefa8","Type":"ContainerStarted","Data":"759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed"} Dec 03 11:12:43 crc kubenswrapper[4756]: I1203 11:12:43.090102 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-269cd" event={"ID":"1a408782-00c0-46f6-8559-023f8753699e","Type":"ContainerStarted","Data":"f23fa67d9ff189536b712ff781a70ecce9a030a5656ef40a5b1137a560ed2163"} Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.102113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"77eb8ce5-5779-43bf-a57b-7ace73542f58","Type":"ContainerStarted","Data":"f388c039d57a2c6ae581aea2adaeda44c3bb1314902ef7b96ccd9a84c8183d40"} Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.106705 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-269cd" event={"ID":"1a408782-00c0-46f6-8559-023f8753699e","Type":"ContainerStarted","Data":"55bf18d752ead6bcc7e46590d483585687b13e7db54edc1239c0d60cd7757075"} Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.106839 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.106890 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.109059 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e429c7e2-748f-4231-902c-00290ebe9eb9","Type":"ContainerStarted","Data":"3266709d38a8a9d7ab74bd0c90ce25d5ae5686225afbd9efb3b7cb1ae56e5c4d"} Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.124406 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.307463172 podStartE2EDuration="36.124384367s" podCreationTimestamp="2025-12-03 11:12:08 +0000 UTC" firstStartedPulling="2025-12-03 11:12:35.979893269 +0000 UTC m=+1167.009894513" lastFinishedPulling="2025-12-03 11:12:43.796814464 +0000 UTC m=+1174.826815708" observedRunningTime="2025-12-03 11:12:44.12060735 +0000 UTC m=+1175.150608594" watchObservedRunningTime="2025-12-03 11:12:44.124384367 +0000 UTC m=+1175.154385611" Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.160887 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=29.427984301 podStartE2EDuration="38.160857904s" podCreationTimestamp="2025-12-03 
11:12:06 +0000 UTC" firstStartedPulling="2025-12-03 11:12:35.078986141 +0000 UTC m=+1166.108987385" lastFinishedPulling="2025-12-03 11:12:43.811859744 +0000 UTC m=+1174.841860988" observedRunningTime="2025-12-03 11:12:44.152730861 +0000 UTC m=+1175.182732105" watchObservedRunningTime="2025-12-03 11:12:44.160857904 +0000 UTC m=+1175.190859148" Dec 03 11:12:44 crc kubenswrapper[4756]: I1203 11:12:44.180703 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-269cd" podStartSLOduration=35.248059487 podStartE2EDuration="39.180678542s" podCreationTimestamp="2025-12-03 11:12:05 +0000 UTC" firstStartedPulling="2025-12-03 11:12:35.979746495 +0000 UTC m=+1167.009747739" lastFinishedPulling="2025-12-03 11:12:39.91236554 +0000 UTC m=+1170.942366794" observedRunningTime="2025-12-03 11:12:44.171148236 +0000 UTC m=+1175.201149480" watchObservedRunningTime="2025-12-03 11:12:44.180678542 +0000 UTC m=+1175.210679786" Dec 03 11:12:45 crc kubenswrapper[4756]: I1203 11:12:45.134459 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:45 crc kubenswrapper[4756]: I1203 11:12:45.767457 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 03 11:12:46 crc kubenswrapper[4756]: I1203 11:12:46.135937 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:46 crc kubenswrapper[4756]: I1203 11:12:46.174518 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:46 crc kubenswrapper[4756]: I1203 11:12:46.196857 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:12:46 crc kubenswrapper[4756]: I1203 11:12:46.198289 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" 
Dec 03 11:12:46 crc kubenswrapper[4756]: I1203 11:12:46.573209 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:12:46 crc kubenswrapper[4756]: I1203 11:12:46.644698 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-b8kgq"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.049857 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.094984 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.132462 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.167169 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.169984 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.369027 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-zjc57"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.370889 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.377547 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.388552 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-zjc57"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.446763 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9vc6f"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.448841 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.452829 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.462412 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9vc6f"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.492492 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-config\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.493485 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsdkn\" (UniqueName: \"kubernetes.io/projected/0cc611a2-f64e-4fc0-994a-760b3597462a-kube-api-access-dsdkn\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.493577 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.493636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596156 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7b8a2775-a311-44e5-80da-356fcba8da63-ovs-rundir\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596215 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8a2775-a311-44e5-80da-356fcba8da63-combined-ca-bundle\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596263 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbd48\" (UniqueName: \"kubernetes.io/projected/7b8a2775-a311-44e5-80da-356fcba8da63-kube-api-access-rbd48\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 
11:12:47.596379 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596413 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b8a2775-a311-44e5-80da-356fcba8da63-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596511 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7b8a2775-a311-44e5-80da-356fcba8da63-ovn-rundir\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596537 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-config\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596602 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8a2775-a311-44e5-80da-356fcba8da63-config\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.596639 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsdkn\" (UniqueName: \"kubernetes.io/projected/0cc611a2-f64e-4fc0-994a-760b3597462a-kube-api-access-dsdkn\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.597534 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.597634 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.598063 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-config\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.612480 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 03 
11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.614521 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.617895 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.618177 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qx892" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.618405 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.618601 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.624093 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsdkn\" (UniqueName: \"kubernetes.io/projected/0cc611a2-f64e-4fc0-994a-760b3597462a-kube-api-access-dsdkn\") pod \"dnsmasq-dns-7fd796d7df-zjc57\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.628243 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.656860 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-zjc57"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.657890 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.702913 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7b8a2775-a311-44e5-80da-356fcba8da63-ovs-rundir\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.703577 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8a2775-a311-44e5-80da-356fcba8da63-combined-ca-bundle\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.703717 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.703880 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbd48\" (UniqueName: \"kubernetes.io/projected/7b8a2775-a311-44e5-80da-356fcba8da63-kube-api-access-rbd48\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.704116 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0976760e-f227-4d4e-a8a3-ed0ac129702c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " 
pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.704266 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b8a2775-a311-44e5-80da-356fcba8da63-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.704490 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0976760e-f227-4d4e-a8a3-ed0ac129702c-scripts\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.704636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.704873 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7b8a2775-a311-44e5-80da-356fcba8da63-ovn-rundir\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.705077 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnhc\" (UniqueName: \"kubernetes.io/projected/0976760e-f227-4d4e-a8a3-ed0ac129702c-kube-api-access-sbnhc\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 
11:12:47.705321 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.705454 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0976760e-f227-4d4e-a8a3-ed0ac129702c-config\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.705584 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8a2775-a311-44e5-80da-356fcba8da63-config\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.705468 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7b8a2775-a311-44e5-80da-356fcba8da63-ovs-rundir\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.707003 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7b8a2775-a311-44e5-80da-356fcba8da63-ovn-rundir\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.707522 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7b8a2775-a311-44e5-80da-356fcba8da63-config\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.728247 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b8a2775-a311-44e5-80da-356fcba8da63-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.742795 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8a2775-a311-44e5-80da-356fcba8da63-combined-ca-bundle\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.756590 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mtvr7"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.758320 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.779322 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbd48\" (UniqueName: \"kubernetes.io/projected/7b8a2775-a311-44e5-80da-356fcba8da63-kube-api-access-rbd48\") pod \"ovn-controller-metrics-9vc6f\" (UID: \"7b8a2775-a311-44e5-80da-356fcba8da63\") " pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.782211 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.783647 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9vc6f" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.808596 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mtvr7"] Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.827621 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0976760e-f227-4d4e-a8a3-ed0ac129702c-scripts\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.827678 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.827788 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnhc\" (UniqueName: \"kubernetes.io/projected/0976760e-f227-4d4e-a8a3-ed0ac129702c-kube-api-access-sbnhc\") pod \"ovn-northd-0\" (UID: 
\"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.827883 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.827911 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0976760e-f227-4d4e-a8a3-ed0ac129702c-config\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.836115 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0976760e-f227-4d4e-a8a3-ed0ac129702c-scripts\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.838201 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.838353 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0976760e-f227-4d4e-a8a3-ed0ac129702c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.838979 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/0976760e-f227-4d4e-a8a3-ed0ac129702c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.839998 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0976760e-f227-4d4e-a8a3-ed0ac129702c-config\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.843896 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.846383 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.847667 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0976760e-f227-4d4e-a8a3-ed0ac129702c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.885117 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnhc\" (UniqueName: \"kubernetes.io/projected/0976760e-f227-4d4e-a8a3-ed0ac129702c-kube-api-access-sbnhc\") pod \"ovn-northd-0\" (UID: \"0976760e-f227-4d4e-a8a3-ed0ac129702c\") " pod="openstack/ovn-northd-0" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 
11:12:47.949191 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-config\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.949759 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.949791 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.949863 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thg94\" (UniqueName: \"kubernetes.io/projected/a512a68d-1e73-40eb-a69b-501880734832-kube-api-access-thg94\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:47 crc kubenswrapper[4756]: I1203 11:12:47.949887 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:47 crc 
kubenswrapper[4756]: I1203 11:12:47.987498 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.051319 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thg94\" (UniqueName: \"kubernetes.io/projected/a512a68d-1e73-40eb-a69b-501880734832-kube-api-access-thg94\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.051393 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.051518 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-config\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.051550 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.051583 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: 
\"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.052850 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.053705 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.054161 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-config\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.054941 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.085524 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thg94\" (UniqueName: \"kubernetes.io/projected/a512a68d-1e73-40eb-a69b-501880734832-kube-api-access-thg94\") pod \"dnsmasq-dns-86db49b7ff-mtvr7\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc 
kubenswrapper[4756]: I1203 11:12:48.143661 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerName="dnsmasq-dns" containerID="cri-o://96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac" gracePeriod=10 Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.313989 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.686627 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9vc6f"] Dec 03 11:12:48 crc kubenswrapper[4756]: W1203 11:12:48.699999 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b8a2775_a311_44e5_80da_356fcba8da63.slice/crio-e57c578faec3638d9b297c29f2875d8eb826f3daaac21f02e9e30cc28d176178 WatchSource:0}: Error finding container e57c578faec3638d9b297c29f2875d8eb826f3daaac21f02e9e30cc28d176178: Status 404 returned error can't find the container with id e57c578faec3638d9b297c29f2875d8eb826f3daaac21f02e9e30cc28d176178 Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.763698 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-zjc57"] Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.865509 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.984920 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-dns-svc\") pod \"1b8749e2-448f-47ea-88a5-fea22b3edf54\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.985377 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g745s\" (UniqueName: \"kubernetes.io/projected/1b8749e2-448f-47ea-88a5-fea22b3edf54-kube-api-access-g745s\") pod \"1b8749e2-448f-47ea-88a5-fea22b3edf54\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.985509 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-config\") pod \"1b8749e2-448f-47ea-88a5-fea22b3edf54\" (UID: \"1b8749e2-448f-47ea-88a5-fea22b3edf54\") " Dec 03 11:12:48 crc kubenswrapper[4756]: I1203 11:12:48.988397 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.008531 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8749e2-448f-47ea-88a5-fea22b3edf54-kube-api-access-g745s" (OuterVolumeSpecName: "kube-api-access-g745s") pod "1b8749e2-448f-47ea-88a5-fea22b3edf54" (UID: "1b8749e2-448f-47ea-88a5-fea22b3edf54"). InnerVolumeSpecName "kube-api-access-g745s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.028090 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mtvr7"] Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.066629 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b8749e2-448f-47ea-88a5-fea22b3edf54" (UID: "1b8749e2-448f-47ea-88a5-fea22b3edf54"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.090095 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.090650 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g745s\" (UniqueName: \"kubernetes.io/projected/1b8749e2-448f-47ea-88a5-fea22b3edf54-kube-api-access-g745s\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.093008 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-config" (OuterVolumeSpecName: "config") pod "1b8749e2-448f-47ea-88a5-fea22b3edf54" (UID: "1b8749e2-448f-47ea-88a5-fea22b3edf54"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.169821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0976760e-f227-4d4e-a8a3-ed0ac129702c","Type":"ContainerStarted","Data":"0dae8b02f2cf433dde3a77e84f7ee5e35b09127e93d463ae6653696496f89be4"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.173608 4756 generic.go:334] "Generic (PLEG): container finished" podID="0cc611a2-f64e-4fc0-994a-760b3597462a" containerID="199e304efcf11fc4a73fc6700b0f2e4bd41a9bb9c73819fbcf804d4d0fd743c0" exitCode=0 Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.173758 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" event={"ID":"0cc611a2-f64e-4fc0-994a-760b3597462a","Type":"ContainerDied","Data":"199e304efcf11fc4a73fc6700b0f2e4bd41a9bb9c73819fbcf804d4d0fd743c0"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.173810 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" event={"ID":"0cc611a2-f64e-4fc0-994a-760b3597462a","Type":"ContainerStarted","Data":"74f6a5558584fbae9b50f65acf7a967662be98016a2a293e2d9bdeed4a6153e9"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.179574 4756 generic.go:334] "Generic (PLEG): container finished" podID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerID="96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac" exitCode=0 Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.179631 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" event={"ID":"1b8749e2-448f-47ea-88a5-fea22b3edf54","Type":"ContainerDied","Data":"96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.179721 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.179753 4756 scope.go:117] "RemoveContainer" containerID="96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.179708 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-b8kgq" event={"ID":"1b8749e2-448f-47ea-88a5-fea22b3edf54","Type":"ContainerDied","Data":"3e1b52c4f8dd46a9b030d3148c2af79308b62c66ccf347f46cb575a6b0412ba6"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.181305 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9vc6f" event={"ID":"7b8a2775-a311-44e5-80da-356fcba8da63","Type":"ContainerStarted","Data":"e7e07975bfd070ce977041c9a347925623ba8a67ea1d6e650e1dab5ed5b26c6e"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.181410 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9vc6f" event={"ID":"7b8a2775-a311-44e5-80da-356fcba8da63","Type":"ContainerStarted","Data":"e57c578faec3638d9b297c29f2875d8eb826f3daaac21f02e9e30cc28d176178"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.185206 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6980bf2-fd5f-4cb1-b148-414229444006","Type":"ContainerStarted","Data":"0916fb5e47031604c4dab3f2ca0cf519acf58f558ac3a11f886178a65d8b1b62"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.188720 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" event={"ID":"a512a68d-1e73-40eb-a69b-501880734832","Type":"ContainerStarted","Data":"d71faa33492cd543d4f993633435e5877a8e52f6ae90273d104460e992efd72c"} Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.192821 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1b8749e2-448f-47ea-88a5-fea22b3edf54-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.216405 4756 scope.go:117] "RemoveContainer" containerID="9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.257141 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9vc6f" podStartSLOduration=2.257110519 podStartE2EDuration="2.257110519s" podCreationTimestamp="2025-12-03 11:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:12:49.252221547 +0000 UTC m=+1180.282222791" watchObservedRunningTime="2025-12-03 11:12:49.257110519 +0000 UTC m=+1180.287111763" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.262431 4756 scope.go:117] "RemoveContainer" containerID="96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac" Dec 03 11:12:49 crc kubenswrapper[4756]: E1203 11:12:49.266529 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac\": container with ID starting with 96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac not found: ID does not exist" containerID="96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.266624 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac"} err="failed to get container status \"96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac\": rpc error: code = NotFound desc = could not find container \"96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac\": container with ID starting with 
96953f225d9b160f1d1f8a3910cdf6456fde6527fab5fcea2a01b56836cd3fac not found: ID does not exist" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.266663 4756 scope.go:117] "RemoveContainer" containerID="9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725" Dec 03 11:12:49 crc kubenswrapper[4756]: E1203 11:12:49.273192 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725\": container with ID starting with 9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725 not found: ID does not exist" containerID="9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.273270 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725"} err="failed to get container status \"9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725\": rpc error: code = NotFound desc = could not find container \"9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725\": container with ID starting with 9684f75c7770d25b743efd03f5182057230ea84e46158061e22e641d1333c725 not found: ID does not exist" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.282708 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-b8kgq"] Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.295797 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-b8kgq"] Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.516335 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.705404 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-dns-svc\") pod \"0cc611a2-f64e-4fc0-994a-760b3597462a\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.705528 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-ovsdbserver-nb\") pod \"0cc611a2-f64e-4fc0-994a-760b3597462a\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.705656 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsdkn\" (UniqueName: \"kubernetes.io/projected/0cc611a2-f64e-4fc0-994a-760b3597462a-kube-api-access-dsdkn\") pod \"0cc611a2-f64e-4fc0-994a-760b3597462a\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.705689 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-config\") pod \"0cc611a2-f64e-4fc0-994a-760b3597462a\" (UID: \"0cc611a2-f64e-4fc0-994a-760b3597462a\") " Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.716304 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cc611a2-f64e-4fc0-994a-760b3597462a-kube-api-access-dsdkn" (OuterVolumeSpecName: "kube-api-access-dsdkn") pod "0cc611a2-f64e-4fc0-994a-760b3597462a" (UID: "0cc611a2-f64e-4fc0-994a-760b3597462a"). InnerVolumeSpecName "kube-api-access-dsdkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.738085 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-config" (OuterVolumeSpecName: "config") pod "0cc611a2-f64e-4fc0-994a-760b3597462a" (UID: "0cc611a2-f64e-4fc0-994a-760b3597462a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.739327 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0cc611a2-f64e-4fc0-994a-760b3597462a" (UID: "0cc611a2-f64e-4fc0-994a-760b3597462a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.739811 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0cc611a2-f64e-4fc0-994a-760b3597462a" (UID: "0cc611a2-f64e-4fc0-994a-760b3597462a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.807691 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.808408 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.808427 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsdkn\" (UniqueName: \"kubernetes.io/projected/0cc611a2-f64e-4fc0-994a-760b3597462a-kube-api-access-dsdkn\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:49 crc kubenswrapper[4756]: I1203 11:12:49.808442 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc611a2-f64e-4fc0-994a-760b3597462a-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.203419 4756 generic.go:334] "Generic (PLEG): container finished" podID="a512a68d-1e73-40eb-a69b-501880734832" containerID="14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd" exitCode=0 Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.203540 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" event={"ID":"a512a68d-1e73-40eb-a69b-501880734832","Type":"ContainerDied","Data":"14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd"} Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.206711 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" event={"ID":"0cc611a2-f64e-4fc0-994a-760b3597462a","Type":"ContainerDied","Data":"74f6a5558584fbae9b50f65acf7a967662be98016a2a293e2d9bdeed4a6153e9"} Dec 03 
11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.206796 4756 scope.go:117] "RemoveContainer" containerID="199e304efcf11fc4a73fc6700b0f2e4bd41a9bb9c73819fbcf804d4d0fd743c0" Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.206727 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-zjc57" Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.371911 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-zjc57"] Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.384707 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-zjc57"] Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.532764 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.533250 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:50 crc kubenswrapper[4756]: I1203 11:12:50.633347 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.219818 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" event={"ID":"a512a68d-1e73-40eb-a69b-501880734832","Type":"ContainerStarted","Data":"d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f"} Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.220002 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.224451 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"0976760e-f227-4d4e-a8a3-ed0ac129702c","Type":"ContainerStarted","Data":"dd98546d761f394903a18ed624322ecc04ba52ff6610927e9c2ab69eea79b4cd"} Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.224596 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0976760e-f227-4d4e-a8a3-ed0ac129702c","Type":"ContainerStarted","Data":"ee2b4bf2a95343cb3954e4f1083e31e545319fcdcc58481f350d3fc9235e4837"} Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.225656 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.248536 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cc611a2-f64e-4fc0-994a-760b3597462a" path="/var/lib/kubelet/pods/0cc611a2-f64e-4fc0-994a-760b3597462a/volumes" Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.249202 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" path="/var/lib/kubelet/pods/1b8749e2-448f-47ea-88a5-fea22b3edf54/volumes" Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.252562 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" podStartSLOduration=4.25253245 podStartE2EDuration="4.25253245s" podCreationTimestamp="2025-12-03 11:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:12:51.24676691 +0000 UTC m=+1182.276768174" watchObservedRunningTime="2025-12-03 11:12:51.25253245 +0000 UTC m=+1182.282533694" Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.279376 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.011552051 podStartE2EDuration="4.279342716s" podCreationTimestamp="2025-12-03 11:12:47 +0000 UTC" firstStartedPulling="2025-12-03 
11:12:49.008698305 +0000 UTC m=+1180.038699549" lastFinishedPulling="2025-12-03 11:12:50.27648897 +0000 UTC m=+1181.306490214" observedRunningTime="2025-12-03 11:12:51.271456769 +0000 UTC m=+1182.301458043" watchObservedRunningTime="2025-12-03 11:12:51.279342716 +0000 UTC m=+1182.309343960" Dec 03 11:12:51 crc kubenswrapper[4756]: I1203 11:12:51.312291 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.246947 4756 generic.go:334] "Generic (PLEG): container finished" podID="d6980bf2-fd5f-4cb1-b148-414229444006" containerID="0916fb5e47031604c4dab3f2ca0cf519acf58f558ac3a11f886178a65d8b1b62" exitCode=0 Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.247135 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6980bf2-fd5f-4cb1-b148-414229444006","Type":"ContainerDied","Data":"0916fb5e47031604c4dab3f2ca0cf519acf58f558ac3a11f886178a65d8b1b62"} Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.538471 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.608086 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.608616 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 
11:12:52.608695 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.609806 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"664265e67f7670380e0f58f5627ec2e4920ce372619dd56f4c84d4b06cd1734c"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.609879 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://664265e67f7670380e0f58f5627ec2e4920ce372619dd56f4c84d4b06cd1734c" gracePeriod=600 Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.653633 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mtvr7"] Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.697247 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-ggh4n"] Dec 03 11:12:52 crc kubenswrapper[4756]: E1203 11:12:52.697739 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerName="dnsmasq-dns" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.697757 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerName="dnsmasq-dns" Dec 03 11:12:52 crc kubenswrapper[4756]: E1203 11:12:52.697773 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerName="init" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.697778 4756 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerName="init" Dec 03 11:12:52 crc kubenswrapper[4756]: E1203 11:12:52.697794 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc611a2-f64e-4fc0-994a-760b3597462a" containerName="init" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.697803 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc611a2-f64e-4fc0-994a-760b3597462a" containerName="init" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.698018 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc611a2-f64e-4fc0-994a-760b3597462a" containerName="init" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.698033 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8749e2-448f-47ea-88a5-fea22b3edf54" containerName="dnsmasq-dns" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.699086 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.712867 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ggh4n"] Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.810371 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-config\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.810456 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-dns-svc\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc 
kubenswrapper[4756]: I1203 11:12:52.810537 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.810627 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nplqr\" (UniqueName: \"kubernetes.io/projected/40e70be6-c17e-4601-a018-1a708bb91d13-kube-api-access-nplqr\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.810754 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.912923 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.913094 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-config\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc 
kubenswrapper[4756]: I1203 11:12:52.914096 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.914107 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-config\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.913125 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-dns-svc\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.914199 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.914255 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nplqr\" (UniqueName: \"kubernetes.io/projected/40e70be6-c17e-4601-a018-1a708bb91d13-kube-api-access-nplqr\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.914315 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-dns-svc\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.914893 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:52 crc kubenswrapper[4756]: I1203 11:12:52.936236 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nplqr\" (UniqueName: \"kubernetes.io/projected/40e70be6-c17e-4601-a018-1a708bb91d13-kube-api-access-nplqr\") pod \"dnsmasq-dns-698758b865-ggh4n\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.065972 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.308088 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6980bf2-fd5f-4cb1-b148-414229444006","Type":"ContainerStarted","Data":"7ea8e49da1aeb77f9ae461ba90d9b5444df341110895110dad0d33df491888e7"} Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.316512 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="664265e67f7670380e0f58f5627ec2e4920ce372619dd56f4c84d4b06cd1734c" exitCode=0 Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.316812 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" podUID="a512a68d-1e73-40eb-a69b-501880734832" containerName="dnsmasq-dns" containerID="cri-o://d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f" gracePeriod=10 Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.317113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"664265e67f7670380e0f58f5627ec2e4920ce372619dd56f4c84d4b06cd1734c"} Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.317158 4756 scope.go:117] "RemoveContainer" containerID="09868558298ce0e0fe5ddedbb9f992422ad279637132aad7bc1ac485611cc892" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.564320 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371980.290487 podStartE2EDuration="56.564288492s" podCreationTimestamp="2025-12-03 11:11:57 +0000 UTC" firstStartedPulling="2025-12-03 11:11:59.429971863 +0000 UTC m=+1130.459973107" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:12:53.352153408 +0000 UTC 
m=+1184.382154652" watchObservedRunningTime="2025-12-03 11:12:53.564288492 +0000 UTC m=+1184.594289736" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.573939 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ggh4n"] Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.836381 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.846848 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.850090 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.854835 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.855276 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.855582 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-p4vv4" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.858310 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.939907 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/99507e0d-929b-4d13-b820-5fd2869d776e-cache\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.940590 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/99507e0d-929b-4d13-b820-5fd2869d776e-lock\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.940651 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.940673 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrqlg\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-kube-api-access-hrqlg\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.941452 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:53 crc kubenswrapper[4756]: I1203 11:12:53.949760 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:54 crc kubenswrapper[4756]: I1203 11:12:54.042537 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thg94\" (UniqueName: \"kubernetes.io/projected/a512a68d-1e73-40eb-a69b-501880734832-kube-api-access-thg94\") pod \"a512a68d-1e73-40eb-a69b-501880734832\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " Dec 03 11:12:54 crc kubenswrapper[4756]: I1203 11:12:54.042738 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-dns-svc\") pod \"a512a68d-1e73-40eb-a69b-501880734832\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " Dec 03 11:12:54 crc kubenswrapper[4756]: I1203 11:12:54.042849 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-sb\") pod \"a512a68d-1e73-40eb-a69b-501880734832\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " Dec 03 11:12:54 crc kubenswrapper[4756]: I1203 11:12:54.042919 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-config\") pod \"a512a68d-1e73-40eb-a69b-501880734832\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " Dec 03 11:12:54 crc kubenswrapper[4756]: I1203 11:12:54.043073 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-nb\") pod \"a512a68d-1e73-40eb-a69b-501880734832\" (UID: \"a512a68d-1e73-40eb-a69b-501880734832\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.043401 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/99507e0d-929b-4d13-b820-5fd2869d776e-cache\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.043451 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/99507e0d-929b-4d13-b820-5fd2869d776e-lock\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.043489 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.043510 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrqlg\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-kube-api-access-hrqlg\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.043590 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.044212 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.044259 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 11:12:55 crc 
kubenswrapper[4756]: E1203 11:12:54.044369 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift podName:99507e0d-929b-4d13-b820-5fd2869d776e nodeName:}" failed. No retries permitted until 2025-12-03 11:12:54.544336509 +0000 UTC m=+1185.574337753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift") pod "swift-storage-0" (UID: "99507e0d-929b-4d13-b820-5fd2869d776e") : configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.044635 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/99507e0d-929b-4d13-b820-5fd2869d776e-cache\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.045149 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/99507e0d-929b-4d13-b820-5fd2869d776e-lock\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.045345 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.050719 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a512a68d-1e73-40eb-a69b-501880734832-kube-api-access-thg94" (OuterVolumeSpecName: "kube-api-access-thg94") pod "a512a68d-1e73-40eb-a69b-501880734832" (UID: 
"a512a68d-1e73-40eb-a69b-501880734832"). InnerVolumeSpecName "kube-api-access-thg94". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.069606 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrqlg\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-kube-api-access-hrqlg\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.077010 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.092394 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a512a68d-1e73-40eb-a69b-501880734832" (UID: "a512a68d-1e73-40eb-a69b-501880734832"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.096126 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-config" (OuterVolumeSpecName: "config") pod "a512a68d-1e73-40eb-a69b-501880734832" (UID: "a512a68d-1e73-40eb-a69b-501880734832"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.098588 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a512a68d-1e73-40eb-a69b-501880734832" (UID: "a512a68d-1e73-40eb-a69b-501880734832"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.106377 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a512a68d-1e73-40eb-a69b-501880734832" (UID: "a512a68d-1e73-40eb-a69b-501880734832"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.146371 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.146414 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thg94\" (UniqueName: \"kubernetes.io/projected/a512a68d-1e73-40eb-a69b-501880734832-kube-api-access-thg94\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.146427 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.146436 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 
11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.146446 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a512a68d-1e73-40eb-a69b-501880734832-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.329091 4756 generic.go:334] "Generic (PLEG): container finished" podID="a512a68d-1e73-40eb-a69b-501880734832" containerID="d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f" exitCode=0 Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.329196 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" event={"ID":"a512a68d-1e73-40eb-a69b-501880734832","Type":"ContainerDied","Data":"d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f"} Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.329235 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" event={"ID":"a512a68d-1e73-40eb-a69b-501880734832","Type":"ContainerDied","Data":"d71faa33492cd543d4f993633435e5877a8e52f6ae90273d104460e992efd72c"} Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.329254 4756 scope.go:117] "RemoveContainer" containerID="d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.329368 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mtvr7" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.340923 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ggh4n" event={"ID":"40e70be6-c17e-4601-a018-1a708bb91d13","Type":"ContainerStarted","Data":"77fde13494b5fe89ec55061635bdb34f2aafa2d13ff3cd3b60dfa557ba44e058"} Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.365455 4756 scope.go:117] "RemoveContainer" containerID="14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.389180 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mtvr7"] Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.404081 4756 scope.go:117] "RemoveContainer" containerID="d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.404254 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mtvr7"] Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.404640 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f\": container with ID starting with d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f not found: ID does not exist" containerID="d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.404685 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f"} err="failed to get container status \"d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f\": rpc error: code = NotFound desc = could not find container 
\"d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f\": container with ID starting with d8f8ed87db540ca845520f6e39eddd55b951f9aa857ead35a4b5fddc7752e19f not found: ID does not exist" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.404719 4756 scope.go:117] "RemoveContainer" containerID="14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd" Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.406274 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd\": container with ID starting with 14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd not found: ID does not exist" containerID="14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.406295 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd"} err="failed to get container status \"14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd\": rpc error: code = NotFound desc = could not find container \"14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd\": container with ID starting with 14e659a0837d7e6de24ac44a0327ad9d4b6349190180aaaf7261695ea52471cd not found: ID does not exist" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.420695 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fq2jx"] Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.421227 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a512a68d-1e73-40eb-a69b-501880734832" containerName="dnsmasq-dns" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.421242 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a512a68d-1e73-40eb-a69b-501880734832" containerName="dnsmasq-dns" Dec 03 11:12:55 
crc kubenswrapper[4756]: E1203 11:12:54.421270 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a512a68d-1e73-40eb-a69b-501880734832" containerName="init" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.421276 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a512a68d-1e73-40eb-a69b-501880734832" containerName="init" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.421452 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a512a68d-1e73-40eb-a69b-501880734832" containerName="dnsmasq-dns" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.422284 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.424224 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.424555 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.425112 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.444888 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fq2jx"] Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.445717 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-w4zqw ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-w4zqw ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-fq2jx" podUID="25a8c1fa-528d-43ea-b274-a5e48c745f84" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.455669 4756 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zbzgq"] Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.457019 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.476340 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fq2jx"] Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.489454 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zbzgq"] Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555125 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-swiftconf\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555314 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-ring-data-devices\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555427 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-etc-swift\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555465 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zqw\" (UniqueName: 
\"kubernetes.io/projected/25a8c1fa-528d-43ea-b274-a5e48c745f84-kube-api-access-w4zqw\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555509 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-combined-ca-bundle\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555588 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-swiftconf\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555628 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2b9\" (UniqueName: \"kubernetes.io/projected/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-kube-api-access-ms2b9\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555679 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-combined-ca-bundle\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-dispersionconf\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555736 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-dispersionconf\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555815 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-scripts\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555834 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25a8c1fa-528d-43ea-b274-a5e48c745f84-etc-swift\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555884 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-ring-data-devices\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555917 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-scripts\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.555974 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.556165 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.556184 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:54.556279 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift podName:99507e0d-929b-4d13-b820-5fd2869d776e nodeName:}" failed. No retries permitted until 2025-12-03 11:12:55.556259039 +0000 UTC m=+1186.586260273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift") pod "swift-storage-0" (UID: "99507e0d-929b-4d13-b820-5fd2869d776e") : configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658178 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-etc-swift\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658246 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zqw\" (UniqueName: \"kubernetes.io/projected/25a8c1fa-528d-43ea-b274-a5e48c745f84-kube-api-access-w4zqw\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658283 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-combined-ca-bundle\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658325 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-swiftconf\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658362 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2b9\" (UniqueName: 
\"kubernetes.io/projected/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-kube-api-access-ms2b9\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658400 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-combined-ca-bundle\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658433 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-dispersionconf\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658474 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-dispersionconf\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658570 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-scripts\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658597 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25a8c1fa-528d-43ea-b274-a5e48c745f84-etc-swift\") pod 
\"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658641 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-ring-data-devices\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658685 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-scripts\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658741 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-swiftconf\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658767 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-ring-data-devices\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.658935 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-etc-swift\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " 
pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.659445 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25a8c1fa-528d-43ea-b274-a5e48c745f84-etc-swift\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.660112 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-ring-data-devices\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.660238 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-ring-data-devices\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.660260 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-scripts\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.660245 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-scripts\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.664070 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-dispersionconf\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.664379 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-combined-ca-bundle\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.666436 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-swiftconf\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.666594 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-swiftconf\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.667632 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-combined-ca-bundle\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.668047 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-dispersionconf\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.707584 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2b9\" (UniqueName: \"kubernetes.io/projected/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-kube-api-access-ms2b9\") pod \"swift-ring-rebalance-zbzgq\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.714694 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zqw\" (UniqueName: \"kubernetes.io/projected/25a8c1fa-528d-43ea-b274-a5e48c745f84-kube-api-access-w4zqw\") pod \"swift-ring-rebalance-fq2jx\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:54.775482 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.260845 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a512a68d-1e73-40eb-a69b-501880734832" path="/var/lib/kubelet/pods/a512a68d-1e73-40eb-a69b-501880734832/volumes" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.355738 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.371876 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.472555 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-dispersionconf\") pod \"25a8c1fa-528d-43ea-b274-a5e48c745f84\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.473063 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-scripts\") pod \"25a8c1fa-528d-43ea-b274-a5e48c745f84\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.473139 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25a8c1fa-528d-43ea-b274-a5e48c745f84-etc-swift\") pod \"25a8c1fa-528d-43ea-b274-a5e48c745f84\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.473173 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-swiftconf\") pod \"25a8c1fa-528d-43ea-b274-a5e48c745f84\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.473196 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-combined-ca-bundle\") pod \"25a8c1fa-528d-43ea-b274-a5e48c745f84\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.473317 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4zqw\" (UniqueName: 
\"kubernetes.io/projected/25a8c1fa-528d-43ea-b274-a5e48c745f84-kube-api-access-w4zqw\") pod \"25a8c1fa-528d-43ea-b274-a5e48c745f84\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.473341 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-ring-data-devices\") pod \"25a8c1fa-528d-43ea-b274-a5e48c745f84\" (UID: \"25a8c1fa-528d-43ea-b274-a5e48c745f84\") " Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.474877 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-scripts" (OuterVolumeSpecName: "scripts") pod "25a8c1fa-528d-43ea-b274-a5e48c745f84" (UID: "25a8c1fa-528d-43ea-b274-a5e48c745f84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.475190 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "25a8c1fa-528d-43ea-b274-a5e48c745f84" (UID: "25a8c1fa-528d-43ea-b274-a5e48c745f84"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.475193 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25a8c1fa-528d-43ea-b274-a5e48c745f84-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "25a8c1fa-528d-43ea-b274-a5e48c745f84" (UID: "25a8c1fa-528d-43ea-b274-a5e48c745f84"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.481746 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25a8c1fa-528d-43ea-b274-a5e48c745f84" (UID: "25a8c1fa-528d-43ea-b274-a5e48c745f84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.481769 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "25a8c1fa-528d-43ea-b274-a5e48c745f84" (UID: "25a8c1fa-528d-43ea-b274-a5e48c745f84"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.481845 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "25a8c1fa-528d-43ea-b274-a5e48c745f84" (UID: "25a8c1fa-528d-43ea-b274-a5e48c745f84"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.494398 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a8c1fa-528d-43ea-b274-a5e48c745f84-kube-api-access-w4zqw" (OuterVolumeSpecName: "kube-api-access-w4zqw") pod "25a8c1fa-528d-43ea-b274-a5e48c745f84" (UID: "25a8c1fa-528d-43ea-b274-a5e48c745f84"). InnerVolumeSpecName "kube-api-access-w4zqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.576007 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.576527 4756 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.576605 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.576724 4756 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25a8c1fa-528d-43ea-b274-a5e48c745f84-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.576815 4756 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.576883 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25a8c1fa-528d-43ea-b274-a5e48c745f84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.576992 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4zqw\" (UniqueName: \"kubernetes.io/projected/25a8c1fa-528d-43ea-b274-a5e48c745f84-kube-api-access-w4zqw\") on node \"crc\" DevicePath \"\"" Dec 
03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.577096 4756 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25a8c1fa-528d-43ea-b274-a5e48c745f84-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:55.577000 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:55.577257 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: E1203 11:12:55.577400 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift podName:99507e0d-929b-4d13-b820-5fd2869d776e nodeName:}" failed. No retries permitted until 2025-12-03 11:12:57.577373222 +0000 UTC m=+1188.607374466 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift") pod "swift-storage-0" (UID: "99507e0d-929b-4d13-b820-5fd2869d776e") : configmap "swift-ring-files" not found Dec 03 11:12:55 crc kubenswrapper[4756]: I1203 11:12:55.662870 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zbzgq"] Dec 03 11:12:55 crc kubenswrapper[4756]: W1203 11:12:55.663166 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cfbb2e2_672e_48fb_8916_ccb83e962bf3.slice/crio-b5565c746b0cac9dc768800250c351c427d3a89a57f4210d9473fa295db3c458 WatchSource:0}: Error finding container b5565c746b0cac9dc768800250c351c427d3a89a57f4210d9473fa295db3c458: Status 404 returned error can't find the container with id b5565c746b0cac9dc768800250c351c427d3a89a57f4210d9473fa295db3c458 Dec 03 11:12:56 crc kubenswrapper[4756]: I1203 11:12:56.365757 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zbzgq" event={"ID":"5cfbb2e2-672e-48fb-8916-ccb83e962bf3","Type":"ContainerStarted","Data":"b5565c746b0cac9dc768800250c351c427d3a89a57f4210d9473fa295db3c458"} Dec 03 11:12:56 crc kubenswrapper[4756]: I1203 11:12:56.369656 4756 generic.go:334] "Generic (PLEG): container finished" podID="40e70be6-c17e-4601-a018-1a708bb91d13" containerID="4cd748b85a8c1d94491c3ac72c8133be1479faee175ecb994eb2131ba982ba37" exitCode=0 Dec 03 11:12:56 crc kubenswrapper[4756]: I1203 11:12:56.369730 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ggh4n" event={"ID":"40e70be6-c17e-4601-a018-1a708bb91d13","Type":"ContainerDied","Data":"4cd748b85a8c1d94491c3ac72c8133be1479faee175ecb994eb2131ba982ba37"} Dec 03 11:12:56 crc kubenswrapper[4756]: I1203 11:12:56.373401 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fq2jx" Dec 03 11:12:56 crc kubenswrapper[4756]: I1203 11:12:56.373399 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"e0355b17b2ab1c82e928552e43898507356fa2e99840efcf360f3485a28faa26"} Dec 03 11:12:56 crc kubenswrapper[4756]: I1203 11:12:56.514132 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fq2jx"] Dec 03 11:12:56 crc kubenswrapper[4756]: I1203 11:12:56.520428 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-fq2jx"] Dec 03 11:12:56 crc kubenswrapper[4756]: E1203 11:12:56.548927 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a8c1fa_528d_43ea_b274_a5e48c745f84.slice\": RecentStats: unable to find data in memory cache]" Dec 03 11:12:57 crc kubenswrapper[4756]: I1203 11:12:57.249597 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a8c1fa-528d-43ea-b274-a5e48c745f84" path="/var/lib/kubelet/pods/25a8c1fa-528d-43ea-b274-a5e48c745f84/volumes" Dec 03 11:12:57 crc kubenswrapper[4756]: I1203 11:12:57.389236 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ggh4n" event={"ID":"40e70be6-c17e-4601-a018-1a708bb91d13","Type":"ContainerStarted","Data":"ccbb4970afbf6abab6fc067eefefaa1e937614acd1e5e3d3f51aecfdefa79187"} Dec 03 11:12:57 crc kubenswrapper[4756]: I1203 11:12:57.421748 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-ggh4n" podStartSLOduration=5.421721344 podStartE2EDuration="5.421721344s" podCreationTimestamp="2025-12-03 11:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:12:57.412064292 +0000 UTC m=+1188.442065536" watchObservedRunningTime="2025-12-03 11:12:57.421721344 +0000 UTC m=+1188.451722578" Dec 03 11:12:57 crc kubenswrapper[4756]: I1203 11:12:57.618646 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:12:57 crc kubenswrapper[4756]: E1203 11:12:57.618945 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 11:12:57 crc kubenswrapper[4756]: E1203 11:12:57.618991 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 11:12:57 crc kubenswrapper[4756]: E1203 11:12:57.619071 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift podName:99507e0d-929b-4d13-b820-5fd2869d776e nodeName:}" failed. No retries permitted until 2025-12-03 11:13:01.619044955 +0000 UTC m=+1192.649046199 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift") pod "swift-storage-0" (UID: "99507e0d-929b-4d13-b820-5fd2869d776e") : configmap "swift-ring-files" not found Dec 03 11:12:58 crc kubenswrapper[4756]: I1203 11:12:58.067660 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:12:58 crc kubenswrapper[4756]: I1203 11:12:58.680053 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 03 11:12:58 crc kubenswrapper[4756]: I1203 11:12:58.680474 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 03 11:13:01 crc kubenswrapper[4756]: I1203 11:13:01.702799 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:13:01 crc kubenswrapper[4756]: E1203 11:13:01.703111 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 11:13:01 crc kubenswrapper[4756]: E1203 11:13:01.703527 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 11:13:01 crc kubenswrapper[4756]: E1203 11:13:01.703605 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift podName:99507e0d-929b-4d13-b820-5fd2869d776e nodeName:}" failed. No retries permitted until 2025-12-03 11:13:09.703581478 +0000 UTC m=+1200.733582722 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift") pod "swift-storage-0" (UID: "99507e0d-929b-4d13-b820-5fd2869d776e") : configmap "swift-ring-files" not found Dec 03 11:13:03 crc kubenswrapper[4756]: I1203 11:13:03.050241 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 03 11:13:03 crc kubenswrapper[4756]: I1203 11:13:03.069278 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:13:03 crc kubenswrapper[4756]: I1203 11:13:03.163989 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n57vc"] Dec 03 11:13:03 crc kubenswrapper[4756]: I1203 11:13:03.164300 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" podUID="44e97327-85df-405e-afa1-46b2105ccf65" containerName="dnsmasq-dns" containerID="cri-o://e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073" gracePeriod=10 Dec 03 11:13:03 crc kubenswrapper[4756]: I1203 11:13:03.940909 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 03 11:13:03 crc kubenswrapper[4756]: I1203 11:13:03.961437 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.053921 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.072703 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph8j2\" (UniqueName: \"kubernetes.io/projected/44e97327-85df-405e-afa1-46b2105ccf65-kube-api-access-ph8j2\") pod \"44e97327-85df-405e-afa1-46b2105ccf65\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.072829 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-config\") pod \"44e97327-85df-405e-afa1-46b2105ccf65\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.072918 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-dns-svc\") pod \"44e97327-85df-405e-afa1-46b2105ccf65\" (UID: \"44e97327-85df-405e-afa1-46b2105ccf65\") " Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.078264 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e97327-85df-405e-afa1-46b2105ccf65-kube-api-access-ph8j2" (OuterVolumeSpecName: "kube-api-access-ph8j2") pod "44e97327-85df-405e-afa1-46b2105ccf65" (UID: "44e97327-85df-405e-afa1-46b2105ccf65"). InnerVolumeSpecName "kube-api-access-ph8j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.116769 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-config" (OuterVolumeSpecName: "config") pod "44e97327-85df-405e-afa1-46b2105ccf65" (UID: "44e97327-85df-405e-afa1-46b2105ccf65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.128676 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44e97327-85df-405e-afa1-46b2105ccf65" (UID: "44e97327-85df-405e-afa1-46b2105ccf65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.178826 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph8j2\" (UniqueName: \"kubernetes.io/projected/44e97327-85df-405e-afa1-46b2105ccf65-kube-api-access-ph8j2\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.178890 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.178903 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44e97327-85df-405e-afa1-46b2105ccf65-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.452222 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zbzgq" event={"ID":"5cfbb2e2-672e-48fb-8916-ccb83e962bf3","Type":"ContainerStarted","Data":"aa641d82e2d87d37fdc33e5193dabda71288bd2920ab8a95dfd5a9374e40c838"} Dec 
03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.454695 4756 generic.go:334] "Generic (PLEG): container finished" podID="44e97327-85df-405e-afa1-46b2105ccf65" containerID="e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073" exitCode=0 Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.455518 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.455869 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" event={"ID":"44e97327-85df-405e-afa1-46b2105ccf65","Type":"ContainerDied","Data":"e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073"} Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.455923 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-n57vc" event={"ID":"44e97327-85df-405e-afa1-46b2105ccf65","Type":"ContainerDied","Data":"39338d555909977ffa20f3eae526859444a30233fb0270a2b0214ba6aaf84349"} Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.456064 4756 scope.go:117] "RemoveContainer" containerID="e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.482367 4756 scope.go:117] "RemoveContainer" containerID="f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.512274 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zbzgq" podStartSLOduration=2.6015918940000002 podStartE2EDuration="10.512238282s" podCreationTimestamp="2025-12-03 11:12:54 +0000 UTC" firstStartedPulling="2025-12-03 11:12:55.666098969 +0000 UTC m=+1186.696100203" lastFinishedPulling="2025-12-03 11:13:03.576745347 +0000 UTC m=+1194.606746591" observedRunningTime="2025-12-03 11:13:04.492056243 +0000 UTC m=+1195.522057507" 
watchObservedRunningTime="2025-12-03 11:13:04.512238282 +0000 UTC m=+1195.542239526" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.517253 4756 scope.go:117] "RemoveContainer" containerID="e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073" Dec 03 11:13:04 crc kubenswrapper[4756]: E1203 11:13:04.517908 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073\": container with ID starting with e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073 not found: ID does not exist" containerID="e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.517944 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073"} err="failed to get container status \"e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073\": rpc error: code = NotFound desc = could not find container \"e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073\": container with ID starting with e4d48de48733272bc0d4f2bcc48df70010dc4cfe449424dcaa30768c70fae073 not found: ID does not exist" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.517984 4756 scope.go:117] "RemoveContainer" containerID="f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95" Dec 03 11:13:04 crc kubenswrapper[4756]: E1203 11:13:04.518542 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95\": container with ID starting with f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95 not found: ID does not exist" containerID="f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95" Dec 03 11:13:04 crc 
kubenswrapper[4756]: I1203 11:13:04.518569 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95"} err="failed to get container status \"f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95\": rpc error: code = NotFound desc = could not find container \"f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95\": container with ID starting with f4a86505463837d37a1b1edc54b6b3fc62f5cd6b8c877ddfe2ebac539afe7e95 not found: ID does not exist" Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.535584 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n57vc"] Dec 03 11:13:04 crc kubenswrapper[4756]: I1203 11:13:04.552274 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-n57vc"] Dec 03 11:13:05 crc kubenswrapper[4756]: I1203 11:13:05.246059 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e97327-85df-405e-afa1-46b2105ccf65" path="/var/lib/kubelet/pods/44e97327-85df-405e-afa1-46b2105ccf65/volumes" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.065222 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kl6wn"] Dec 03 11:13:06 crc kubenswrapper[4756]: E1203 11:13:06.065887 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e97327-85df-405e-afa1-46b2105ccf65" containerName="init" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.065916 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e97327-85df-405e-afa1-46b2105ccf65" containerName="init" Dec 03 11:13:06 crc kubenswrapper[4756]: E1203 11:13:06.065935 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e97327-85df-405e-afa1-46b2105ccf65" containerName="dnsmasq-dns" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.065943 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="44e97327-85df-405e-afa1-46b2105ccf65" containerName="dnsmasq-dns" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.066245 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e97327-85df-405e-afa1-46b2105ccf65" containerName="dnsmasq-dns" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.067222 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.073715 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kl6wn"] Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.089097 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a6da-account-create-update-5nsn7"] Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.090560 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.093732 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.110899 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6da-account-create-update-5nsn7"] Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.129448 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535da1f9-45e8-4189-9e84-0c5866c0d612-operator-scripts\") pod \"glance-db-create-kl6wn\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.129529 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd4vt\" (UniqueName: 
\"kubernetes.io/projected/535da1f9-45e8-4189-9e84-0c5866c0d612-kube-api-access-hd4vt\") pod \"glance-db-create-kl6wn\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.231348 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535da1f9-45e8-4189-9e84-0c5866c0d612-operator-scripts\") pod \"glance-db-create-kl6wn\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.231410 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd4vt\" (UniqueName: \"kubernetes.io/projected/535da1f9-45e8-4189-9e84-0c5866c0d612-kube-api-access-hd4vt\") pod \"glance-db-create-kl6wn\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.231448 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2596\" (UniqueName: \"kubernetes.io/projected/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-kube-api-access-d2596\") pod \"glance-a6da-account-create-update-5nsn7\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.231476 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-operator-scripts\") pod \"glance-a6da-account-create-update-5nsn7\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.232570 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535da1f9-45e8-4189-9e84-0c5866c0d612-operator-scripts\") pod \"glance-db-create-kl6wn\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.254580 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd4vt\" (UniqueName: \"kubernetes.io/projected/535da1f9-45e8-4189-9e84-0c5866c0d612-kube-api-access-hd4vt\") pod \"glance-db-create-kl6wn\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.333496 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2596\" (UniqueName: \"kubernetes.io/projected/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-kube-api-access-d2596\") pod \"glance-a6da-account-create-update-5nsn7\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.335043 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-operator-scripts\") pod \"glance-a6da-account-create-update-5nsn7\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.335132 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-operator-scripts\") pod \"glance-a6da-account-create-update-5nsn7\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.352454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-d2596\" (UniqueName: \"kubernetes.io/projected/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-kube-api-access-d2596\") pod \"glance-a6da-account-create-update-5nsn7\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.405141 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:06 crc kubenswrapper[4756]: I1203 11:13:06.419278 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:07 crc kubenswrapper[4756]: I1203 11:13:07.275929 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6da-account-create-update-5nsn7"] Dec 03 11:13:07 crc kubenswrapper[4756]: I1203 11:13:07.354665 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kl6wn"] Dec 03 11:13:07 crc kubenswrapper[4756]: I1203 11:13:07.483239 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6da-account-create-update-5nsn7" event={"ID":"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f","Type":"ContainerStarted","Data":"0fda4b87ba5d950d517124b474a281f372841ef85d041dc23741f4131e28ed54"} Dec 03 11:13:07 crc kubenswrapper[4756]: I1203 11:13:07.485187 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kl6wn" event={"ID":"535da1f9-45e8-4189-9e84-0c5866c0d612","Type":"ContainerStarted","Data":"b93b0baea7c1fefda80ce9e3bdb861fb48d010597fcfc0a1660ee0ae74dcb697"} Dec 03 11:13:08 crc kubenswrapper[4756]: I1203 11:13:08.495544 4756 generic.go:334] "Generic (PLEG): container finished" podID="6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f" containerID="86e3b5ec1ea179c81c0e2ccfb3cc89a3e91e4201e9d3428062281c7db20c0cba" exitCode=0 Dec 03 11:13:08 crc kubenswrapper[4756]: I1203 11:13:08.495626 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-a6da-account-create-update-5nsn7" event={"ID":"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f","Type":"ContainerDied","Data":"86e3b5ec1ea179c81c0e2ccfb3cc89a3e91e4201e9d3428062281c7db20c0cba"} Dec 03 11:13:08 crc kubenswrapper[4756]: I1203 11:13:08.497865 4756 generic.go:334] "Generic (PLEG): container finished" podID="535da1f9-45e8-4189-9e84-0c5866c0d612" containerID="b585e9d659ec765c93fb596e3fc6509f8d7187e70906f16bd474177bdfcd25e0" exitCode=0 Dec 03 11:13:08 crc kubenswrapper[4756]: I1203 11:13:08.497915 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kl6wn" event={"ID":"535da1f9-45e8-4189-9e84-0c5866c0d612","Type":"ContainerDied","Data":"b585e9d659ec765c93fb596e3fc6509f8d7187e70906f16bd474177bdfcd25e0"} Dec 03 11:13:09 crc kubenswrapper[4756]: I1203 11:13:09.797544 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:13:09 crc kubenswrapper[4756]: E1203 11:13:09.797755 4756 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 11:13:09 crc kubenswrapper[4756]: E1203 11:13:09.798057 4756 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 11:13:09 crc kubenswrapper[4756]: E1203 11:13:09.798126 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift podName:99507e0d-929b-4d13-b820-5fd2869d776e nodeName:}" failed. No retries permitted until 2025-12-03 11:13:25.798104308 +0000 UTC m=+1216.828105552 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift") pod "swift-storage-0" (UID: "99507e0d-929b-4d13-b820-5fd2869d776e") : configmap "swift-ring-files" not found Dec 03 11:13:09 crc kubenswrapper[4756]: I1203 11:13:09.954453 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:09 crc kubenswrapper[4756]: I1203 11:13:09.964708 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.102542 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-operator-scripts\") pod \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.102686 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535da1f9-45e8-4189-9e84-0c5866c0d612-operator-scripts\") pod \"535da1f9-45e8-4189-9e84-0c5866c0d612\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.102860 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd4vt\" (UniqueName: \"kubernetes.io/projected/535da1f9-45e8-4189-9e84-0c5866c0d612-kube-api-access-hd4vt\") pod \"535da1f9-45e8-4189-9e84-0c5866c0d612\" (UID: \"535da1f9-45e8-4189-9e84-0c5866c0d612\") " Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.102917 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2596\" (UniqueName: \"kubernetes.io/projected/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-kube-api-access-d2596\") pod 
\"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\" (UID: \"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f\") " Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.103724 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/535da1f9-45e8-4189-9e84-0c5866c0d612-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "535da1f9-45e8-4189-9e84-0c5866c0d612" (UID: "535da1f9-45e8-4189-9e84-0c5866c0d612"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.103937 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f" (UID: "6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.110277 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535da1f9-45e8-4189-9e84-0c5866c0d612-kube-api-access-hd4vt" (OuterVolumeSpecName: "kube-api-access-hd4vt") pod "535da1f9-45e8-4189-9e84-0c5866c0d612" (UID: "535da1f9-45e8-4189-9e84-0c5866c0d612"). InnerVolumeSpecName "kube-api-access-hd4vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.111296 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-kube-api-access-d2596" (OuterVolumeSpecName: "kube-api-access-d2596") pod "6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f" (UID: "6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f"). InnerVolumeSpecName "kube-api-access-d2596". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.146747 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-srkv7"] Dec 03 11:13:10 crc kubenswrapper[4756]: E1203 11:13:10.147208 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535da1f9-45e8-4189-9e84-0c5866c0d612" containerName="mariadb-database-create" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.147224 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="535da1f9-45e8-4189-9e84-0c5866c0d612" containerName="mariadb-database-create" Dec 03 11:13:10 crc kubenswrapper[4756]: E1203 11:13:10.147245 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f" containerName="mariadb-account-create-update" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.147252 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f" containerName="mariadb-account-create-update" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.147429 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="535da1f9-45e8-4189-9e84-0c5866c0d612" containerName="mariadb-database-create" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.147447 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f" containerName="mariadb-account-create-update" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.148213 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.171117 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-srkv7"] Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.204629 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95cvz\" (UniqueName: \"kubernetes.io/projected/44226660-d2e9-4c13-bb9c-e2113bc0fcec-kube-api-access-95cvz\") pod \"keystone-db-create-srkv7\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.204829 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44226660-d2e9-4c13-bb9c-e2113bc0fcec-operator-scripts\") pod \"keystone-db-create-srkv7\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.205002 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.205032 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535da1f9-45e8-4189-9e84-0c5866c0d612-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.205043 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd4vt\" (UniqueName: \"kubernetes.io/projected/535da1f9-45e8-4189-9e84-0c5866c0d612-kube-api-access-hd4vt\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.205056 4756 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d2596\" (UniqueName: \"kubernetes.io/projected/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f-kube-api-access-d2596\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.307286 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44226660-d2e9-4c13-bb9c-e2113bc0fcec-operator-scripts\") pod \"keystone-db-create-srkv7\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.307439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95cvz\" (UniqueName: \"kubernetes.io/projected/44226660-d2e9-4c13-bb9c-e2113bc0fcec-kube-api-access-95cvz\") pod \"keystone-db-create-srkv7\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.309139 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44226660-d2e9-4c13-bb9c-e2113bc0fcec-operator-scripts\") pod \"keystone-db-create-srkv7\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.327931 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95cvz\" (UniqueName: \"kubernetes.io/projected/44226660-d2e9-4c13-bb9c-e2113bc0fcec-kube-api-access-95cvz\") pod \"keystone-db-create-srkv7\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.427040 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a266-account-create-update-x49nv"] Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.428647 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.434431 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.460883 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a266-account-create-update-x49nv"] Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.514375 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.517560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6da-account-create-update-5nsn7" event={"ID":"6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f","Type":"ContainerDied","Data":"0fda4b87ba5d950d517124b474a281f372841ef85d041dc23741f4131e28ed54"} Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.517616 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fda4b87ba5d950d517124b474a281f372841ef85d041dc23741f4131e28ed54" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.517729 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6da-account-create-update-5nsn7" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.526019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kl6wn" event={"ID":"535da1f9-45e8-4189-9e84-0c5866c0d612","Type":"ContainerDied","Data":"b93b0baea7c1fefda80ce9e3bdb861fb48d010597fcfc0a1660ee0ae74dcb697"} Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.526180 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93b0baea7c1fefda80ce9e3bdb861fb48d010597fcfc0a1660ee0ae74dcb697" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.526291 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kl6wn" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.615984 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e512a844-d4d9-4bc3-88bb-a0f11fb31821-operator-scripts\") pod \"keystone-a266-account-create-update-x49nv\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.616499 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tdqk\" (UniqueName: \"kubernetes.io/projected/e512a844-d4d9-4bc3-88bb-a0f11fb31821-kube-api-access-8tdqk\") pod \"keystone-a266-account-create-update-x49nv\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.652055 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-tjhsk"] Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.653396 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.678219 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tjhsk"] Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.717626 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n46sq\" (UniqueName: \"kubernetes.io/projected/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-kube-api-access-n46sq\") pod \"placement-db-create-tjhsk\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.717727 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tdqk\" (UniqueName: \"kubernetes.io/projected/e512a844-d4d9-4bc3-88bb-a0f11fb31821-kube-api-access-8tdqk\") pod \"keystone-a266-account-create-update-x49nv\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.717757 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e512a844-d4d9-4bc3-88bb-a0f11fb31821-operator-scripts\") pod \"keystone-a266-account-create-update-x49nv\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.717840 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-operator-scripts\") pod \"placement-db-create-tjhsk\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.718940 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e512a844-d4d9-4bc3-88bb-a0f11fb31821-operator-scripts\") pod \"keystone-a266-account-create-update-x49nv\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.742323 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tdqk\" (UniqueName: \"kubernetes.io/projected/e512a844-d4d9-4bc3-88bb-a0f11fb31821-kube-api-access-8tdqk\") pod \"keystone-a266-account-create-update-x49nv\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.750946 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.789384 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b30f-account-create-update-c926r"] Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.791404 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.795974 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.801806 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b30f-account-create-update-c926r"] Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.820995 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxgbf\" (UniqueName: \"kubernetes.io/projected/aec32d41-0b95-4035-b092-aa00a4c57129-kube-api-access-kxgbf\") pod \"placement-b30f-account-create-update-c926r\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.821078 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec32d41-0b95-4035-b092-aa00a4c57129-operator-scripts\") pod \"placement-b30f-account-create-update-c926r\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.821128 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-operator-scripts\") pod \"placement-db-create-tjhsk\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.821821 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n46sq\" (UniqueName: \"kubernetes.io/projected/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-kube-api-access-n46sq\") pod 
\"placement-db-create-tjhsk\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.822646 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-operator-scripts\") pod \"placement-db-create-tjhsk\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.842277 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n46sq\" (UniqueName: \"kubernetes.io/projected/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-kube-api-access-n46sq\") pod \"placement-db-create-tjhsk\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.887890 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xlz9h" podUID="e033887b-a32e-4141-9812-455b70f85d39" containerName="ovn-controller" probeResult="failure" output=< Dec 03 11:13:10 crc kubenswrapper[4756]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 03 11:13:10 crc kubenswrapper[4756]: > Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.923814 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxgbf\" (UniqueName: \"kubernetes.io/projected/aec32d41-0b95-4035-b092-aa00a4c57129-kube-api-access-kxgbf\") pod \"placement-b30f-account-create-update-c926r\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.923876 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/aec32d41-0b95-4035-b092-aa00a4c57129-operator-scripts\") pod \"placement-b30f-account-create-update-c926r\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.925096 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec32d41-0b95-4035-b092-aa00a4c57129-operator-scripts\") pod \"placement-b30f-account-create-update-c926r\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.951379 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxgbf\" (UniqueName: \"kubernetes.io/projected/aec32d41-0b95-4035-b092-aa00a4c57129-kube-api-access-kxgbf\") pod \"placement-b30f-account-create-update-c926r\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.994538 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:10 crc kubenswrapper[4756]: I1203 11:13:10.998974 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-srkv7"] Dec 03 11:13:11 crc kubenswrapper[4756]: W1203 11:13:11.019282 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44226660_d2e9_4c13_bb9c_e2113bc0fcec.slice/crio-86a218a268656e856cde4bc44781d8e31aa9813a05133e6139b626ade4c460c3 WatchSource:0}: Error finding container 86a218a268656e856cde4bc44781d8e31aa9813a05133e6139b626ade4c460c3: Status 404 returned error can't find the container with id 86a218a268656e856cde4bc44781d8e31aa9813a05133e6139b626ade4c460c3 Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.115188 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.427448 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a266-account-create-update-x49nv"] Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.506237 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7bvmm"] Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.507642 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.513129 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.513666 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xp874" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.536074 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7bvmm"] Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.547791 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tjhsk"] Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.596113 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-srkv7" event={"ID":"44226660-d2e9-4c13-bb9c-e2113bc0fcec","Type":"ContainerStarted","Data":"aa15435aa579c192c0725626f9ac8b92b6782c9128e0b3a0ce477ac15e2830cc"} Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.596234 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-srkv7" event={"ID":"44226660-d2e9-4c13-bb9c-e2113bc0fcec","Type":"ContainerStarted","Data":"86a218a268656e856cde4bc44781d8e31aa9813a05133e6139b626ade4c460c3"} Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.606273 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a266-account-create-update-x49nv" event={"ID":"e512a844-d4d9-4bc3-88bb-a0f11fb31821","Type":"ContainerStarted","Data":"984b83ba8f6c1d1f4875bfa7a5d0cdb1c21fce647716b099a91b51691158fa7b"} Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.630218 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-srkv7" podStartSLOduration=1.630193916 podStartE2EDuration="1.630193916s" podCreationTimestamp="2025-12-03 11:13:10 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:13:11.61877092 +0000 UTC m=+1202.648772164" watchObservedRunningTime="2025-12-03 11:13:11.630193916 +0000 UTC m=+1202.660195160" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.640989 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwszb\" (UniqueName: \"kubernetes.io/projected/07a714a5-c627-43b1-8bc1-85e157c25fb0-kube-api-access-fwszb\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.641075 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-combined-ca-bundle\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.641130 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-config-data\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.641211 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-db-sync-config-data\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.742941 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwszb\" 
(UniqueName: \"kubernetes.io/projected/07a714a5-c627-43b1-8bc1-85e157c25fb0-kube-api-access-fwszb\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.743135 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-combined-ca-bundle\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.743193 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-config-data\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.743292 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-db-sync-config-data\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.751240 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-combined-ca-bundle\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.752790 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-config-data\") pod \"glance-db-sync-7bvmm\" (UID: 
\"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.758264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-db-sync-config-data\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.770775 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwszb\" (UniqueName: \"kubernetes.io/projected/07a714a5-c627-43b1-8bc1-85e157c25fb0-kube-api-access-fwszb\") pod \"glance-db-sync-7bvmm\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.866465 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7bvmm" Dec 03 11:13:11 crc kubenswrapper[4756]: I1203 11:13:11.898168 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b30f-account-create-update-c926r"] Dec 03 11:13:11 crc kubenswrapper[4756]: W1203 11:13:11.913268 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec32d41_0b95_4035_b092_aa00a4c57129.slice/crio-84e8743dddc39f9cf5d4c84b47f75ee42f2ee5464ab8c9878d053addc6122fc8 WatchSource:0}: Error finding container 84e8743dddc39f9cf5d4c84b47f75ee42f2ee5464ab8c9878d053addc6122fc8: Status 404 returned error can't find the container with id 84e8743dddc39f9cf5d4c84b47f75ee42f2ee5464ab8c9878d053addc6122fc8 Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.430866 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7bvmm"] Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.617384 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="aec32d41-0b95-4035-b092-aa00a4c57129" containerID="23059670a483d42a68efbefa5cd077723e05d62dd46ee966723cc77ac3cd5283" exitCode=0 Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.617434 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b30f-account-create-update-c926r" event={"ID":"aec32d41-0b95-4035-b092-aa00a4c57129","Type":"ContainerDied","Data":"23059670a483d42a68efbefa5cd077723e05d62dd46ee966723cc77ac3cd5283"} Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.618178 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b30f-account-create-update-c926r" event={"ID":"aec32d41-0b95-4035-b092-aa00a4c57129","Type":"ContainerStarted","Data":"84e8743dddc39f9cf5d4c84b47f75ee42f2ee5464ab8c9878d053addc6122fc8"} Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.620612 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7bvmm" event={"ID":"07a714a5-c627-43b1-8bc1-85e157c25fb0","Type":"ContainerStarted","Data":"1c940324d5ba65523d9949f99a9553833966bb61da6fe3e541bd9061ab2e8af0"} Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.623317 4756 generic.go:334] "Generic (PLEG): container finished" podID="5cfbb2e2-672e-48fb-8916-ccb83e962bf3" containerID="aa641d82e2d87d37fdc33e5193dabda71288bd2920ab8a95dfd5a9374e40c838" exitCode=0 Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.623427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zbzgq" event={"ID":"5cfbb2e2-672e-48fb-8916-ccb83e962bf3","Type":"ContainerDied","Data":"aa641d82e2d87d37fdc33e5193dabda71288bd2920ab8a95dfd5a9374e40c838"} Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.627369 4756 generic.go:334] "Generic (PLEG): container finished" podID="44226660-d2e9-4c13-bb9c-e2113bc0fcec" containerID="aa15435aa579c192c0725626f9ac8b92b6782c9128e0b3a0ce477ac15e2830cc" exitCode=0 Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.627455 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-srkv7" event={"ID":"44226660-d2e9-4c13-bb9c-e2113bc0fcec","Type":"ContainerDied","Data":"aa15435aa579c192c0725626f9ac8b92b6782c9128e0b3a0ce477ac15e2830cc"} Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.630539 4756 generic.go:334] "Generic (PLEG): container finished" podID="e512a844-d4d9-4bc3-88bb-a0f11fb31821" containerID="cfb9e6ced005cec1574a8e9f5edc9f907cf9ced9a01eb7398a35e00f64842c5c" exitCode=0 Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.630611 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a266-account-create-update-x49nv" event={"ID":"e512a844-d4d9-4bc3-88bb-a0f11fb31821","Type":"ContainerDied","Data":"cfb9e6ced005cec1574a8e9f5edc9f907cf9ced9a01eb7398a35e00f64842c5c"} Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.632754 4756 generic.go:334] "Generic (PLEG): container finished" podID="fcc6158c-bc38-42b6-9fbc-2bd15e907f48" containerID="cfff8ca639b1b655b0282f0a8213e104e3656815007e3478757393abbfa685d2" exitCode=0 Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.632843 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tjhsk" event={"ID":"fcc6158c-bc38-42b6-9fbc-2bd15e907f48","Type":"ContainerDied","Data":"cfff8ca639b1b655b0282f0a8213e104e3656815007e3478757393abbfa685d2"} Dec 03 11:13:12 crc kubenswrapper[4756]: I1203 11:13:12.632883 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tjhsk" event={"ID":"fcc6158c-bc38-42b6-9fbc-2bd15e907f48","Type":"ContainerStarted","Data":"5b73fe9fb6164ccc157b446e113d9d692e389e1d67138244760b034e3fc294e4"} Dec 03 11:13:13 crc kubenswrapper[4756]: I1203 11:13:13.643386 4756 generic.go:334] "Generic (PLEG): container finished" podID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerID="94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49" exitCode=0 Dec 03 11:13:13 crc kubenswrapper[4756]: 
I1203 11:13:13.643602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8770c2d-c514-44e9-99d6-c8713f7f9ab1","Type":"ContainerDied","Data":"94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49"} Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.101187 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.188722 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdqk\" (UniqueName: \"kubernetes.io/projected/e512a844-d4d9-4bc3-88bb-a0f11fb31821-kube-api-access-8tdqk\") pod \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.188880 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e512a844-d4d9-4bc3-88bb-a0f11fb31821-operator-scripts\") pod \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\" (UID: \"e512a844-d4d9-4bc3-88bb-a0f11fb31821\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.191398 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e512a844-d4d9-4bc3-88bb-a0f11fb31821-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e512a844-d4d9-4bc3-88bb-a0f11fb31821" (UID: "e512a844-d4d9-4bc3-88bb-a0f11fb31821"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.191654 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e512a844-d4d9-4bc3-88bb-a0f11fb31821-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.201751 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e512a844-d4d9-4bc3-88bb-a0f11fb31821-kube-api-access-8tdqk" (OuterVolumeSpecName: "kube-api-access-8tdqk") pod "e512a844-d4d9-4bc3-88bb-a0f11fb31821" (UID: "e512a844-d4d9-4bc3-88bb-a0f11fb31821"). InnerVolumeSpecName "kube-api-access-8tdqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.296313 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdqk\" (UniqueName: \"kubernetes.io/projected/e512a844-d4d9-4bc3-88bb-a0f11fb31821-kube-api-access-8tdqk\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.330602 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.336508 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.346360 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.381558 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398003 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95cvz\" (UniqueName: \"kubernetes.io/projected/44226660-d2e9-4c13-bb9c-e2113bc0fcec-kube-api-access-95cvz\") pod \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398074 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec32d41-0b95-4035-b092-aa00a4c57129-operator-scripts\") pod \"aec32d41-0b95-4035-b092-aa00a4c57129\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398143 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-combined-ca-bundle\") pod \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398170 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxgbf\" (UniqueName: \"kubernetes.io/projected/aec32d41-0b95-4035-b092-aa00a4c57129-kube-api-access-kxgbf\") pod \"aec32d41-0b95-4035-b092-aa00a4c57129\" (UID: \"aec32d41-0b95-4035-b092-aa00a4c57129\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398211 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-swiftconf\") pod \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398236 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-etc-swift\") pod \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398338 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms2b9\" (UniqueName: \"kubernetes.io/projected/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-kube-api-access-ms2b9\") pod \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398440 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44226660-d2e9-4c13-bb9c-e2113bc0fcec-operator-scripts\") pod \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\" (UID: \"44226660-d2e9-4c13-bb9c-e2113bc0fcec\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398494 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-ring-data-devices\") pod \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398535 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-scripts\") pod \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.398612 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-dispersionconf\") pod \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\" (UID: \"5cfbb2e2-672e-48fb-8916-ccb83e962bf3\") " Dec 03 11:13:14 
crc kubenswrapper[4756]: I1203 11:13:14.401892 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec32d41-0b95-4035-b092-aa00a4c57129-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aec32d41-0b95-4035-b092-aa00a4c57129" (UID: "aec32d41-0b95-4035-b092-aa00a4c57129"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.402031 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44226660-d2e9-4c13-bb9c-e2113bc0fcec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44226660-d2e9-4c13-bb9c-e2113bc0fcec" (UID: "44226660-d2e9-4c13-bb9c-e2113bc0fcec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.402137 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5cfbb2e2-672e-48fb-8916-ccb83e962bf3" (UID: "5cfbb2e2-672e-48fb-8916-ccb83e962bf3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.407758 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-kube-api-access-ms2b9" (OuterVolumeSpecName: "kube-api-access-ms2b9") pod "5cfbb2e2-672e-48fb-8916-ccb83e962bf3" (UID: "5cfbb2e2-672e-48fb-8916-ccb83e962bf3"). InnerVolumeSpecName "kube-api-access-ms2b9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.411367 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5cfbb2e2-672e-48fb-8916-ccb83e962bf3" (UID: "5cfbb2e2-672e-48fb-8916-ccb83e962bf3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.413822 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44226660-d2e9-4c13-bb9c-e2113bc0fcec-kube-api-access-95cvz" (OuterVolumeSpecName: "kube-api-access-95cvz") pod "44226660-d2e9-4c13-bb9c-e2113bc0fcec" (UID: "44226660-d2e9-4c13-bb9c-e2113bc0fcec"). InnerVolumeSpecName "kube-api-access-95cvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.413909 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec32d41-0b95-4035-b092-aa00a4c57129-kube-api-access-kxgbf" (OuterVolumeSpecName: "kube-api-access-kxgbf") pod "aec32d41-0b95-4035-b092-aa00a4c57129" (UID: "aec32d41-0b95-4035-b092-aa00a4c57129"). InnerVolumeSpecName "kube-api-access-kxgbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.431905 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5cfbb2e2-672e-48fb-8916-ccb83e962bf3" (UID: "5cfbb2e2-672e-48fb-8916-ccb83e962bf3"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.443526 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cfbb2e2-672e-48fb-8916-ccb83e962bf3" (UID: "5cfbb2e2-672e-48fb-8916-ccb83e962bf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.455832 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-scripts" (OuterVolumeSpecName: "scripts") pod "5cfbb2e2-672e-48fb-8916-ccb83e962bf3" (UID: "5cfbb2e2-672e-48fb-8916-ccb83e962bf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.476438 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5cfbb2e2-672e-48fb-8916-ccb83e962bf3" (UID: "5cfbb2e2-672e-48fb-8916-ccb83e962bf3"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.500502 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n46sq\" (UniqueName: \"kubernetes.io/projected/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-kube-api-access-n46sq\") pod \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.500832 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-operator-scripts\") pod \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\" (UID: \"fcc6158c-bc38-42b6-9fbc-2bd15e907f48\") " Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501277 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95cvz\" (UniqueName: \"kubernetes.io/projected/44226660-d2e9-4c13-bb9c-e2113bc0fcec-kube-api-access-95cvz\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501297 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aec32d41-0b95-4035-b092-aa00a4c57129-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501308 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501318 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxgbf\" (UniqueName: \"kubernetes.io/projected/aec32d41-0b95-4035-b092-aa00a4c57129-kube-api-access-kxgbf\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501328 4756 reconciler_common.go:293] "Volume detached 
for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501339 4756 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501348 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms2b9\" (UniqueName: \"kubernetes.io/projected/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-kube-api-access-ms2b9\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501357 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44226660-d2e9-4c13-bb9c-e2113bc0fcec-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501367 4756 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501352 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcc6158c-bc38-42b6-9fbc-2bd15e907f48" (UID: "fcc6158c-bc38-42b6-9fbc-2bd15e907f48"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501375 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.501434 4756 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5cfbb2e2-672e-48fb-8916-ccb83e962bf3-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.503492 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-kube-api-access-n46sq" (OuterVolumeSpecName: "kube-api-access-n46sq") pod "fcc6158c-bc38-42b6-9fbc-2bd15e907f48" (UID: "fcc6158c-bc38-42b6-9fbc-2bd15e907f48"). InnerVolumeSpecName "kube-api-access-n46sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.602927 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n46sq\" (UniqueName: \"kubernetes.io/projected/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-kube-api-access-n46sq\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.602979 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcc6158c-bc38-42b6-9fbc-2bd15e907f48-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.660609 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a266-account-create-update-x49nv" event={"ID":"e512a844-d4d9-4bc3-88bb-a0f11fb31821","Type":"ContainerDied","Data":"984b83ba8f6c1d1f4875bfa7a5d0cdb1c21fce647716b099a91b51691158fa7b"} Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.660656 4756 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a266-account-create-update-x49nv" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.660660 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984b83ba8f6c1d1f4875bfa7a5d0cdb1c21fce647716b099a91b51691158fa7b" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.662940 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tjhsk" event={"ID":"fcc6158c-bc38-42b6-9fbc-2bd15e907f48","Type":"ContainerDied","Data":"5b73fe9fb6164ccc157b446e113d9d692e389e1d67138244760b034e3fc294e4"} Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.663025 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b73fe9fb6164ccc157b446e113d9d692e389e1d67138244760b034e3fc294e4" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.663089 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tjhsk" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.666821 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b30f-account-create-update-c926r" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.666918 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b30f-account-create-update-c926r" event={"ID":"aec32d41-0b95-4035-b092-aa00a4c57129","Type":"ContainerDied","Data":"84e8743dddc39f9cf5d4c84b47f75ee42f2ee5464ab8c9878d053addc6122fc8"} Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.667006 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e8743dddc39f9cf5d4c84b47f75ee42f2ee5464ab8c9878d053addc6122fc8" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.677195 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8770c2d-c514-44e9-99d6-c8713f7f9ab1","Type":"ContainerStarted","Data":"8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1"} Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.678242 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.680423 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zbzgq" event={"ID":"5cfbb2e2-672e-48fb-8916-ccb83e962bf3","Type":"ContainerDied","Data":"b5565c746b0cac9dc768800250c351c427d3a89a57f4210d9473fa295db3c458"} Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.680464 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5565c746b0cac9dc768800250c351c427d3a89a57f4210d9473fa295db3c458" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.680538 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zbzgq" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.690109 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-srkv7" event={"ID":"44226660-d2e9-4c13-bb9c-e2113bc0fcec","Type":"ContainerDied","Data":"86a218a268656e856cde4bc44781d8e31aa9813a05133e6139b626ade4c460c3"} Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.690171 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a218a268656e856cde4bc44781d8e31aa9813a05133e6139b626ade4c460c3" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.690263 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-srkv7" Dec 03 11:13:14 crc kubenswrapper[4756]: I1203 11:13:14.734174 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.491564341 podStartE2EDuration="1m18.734143278s" podCreationTimestamp="2025-12-03 11:11:56 +0000 UTC" firstStartedPulling="2025-12-03 11:11:58.80279257 +0000 UTC m=+1129.832793814" lastFinishedPulling="2025-12-03 11:12:40.045371507 +0000 UTC m=+1171.075372751" observedRunningTime="2025-12-03 11:13:14.715243578 +0000 UTC m=+1205.745244852" watchObservedRunningTime="2025-12-03 11:13:14.734143278 +0000 UTC m=+1205.764144522" Dec 03 11:13:15 crc kubenswrapper[4756]: I1203 11:13:15.704872 4756 generic.go:334] "Generic (PLEG): container finished" podID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerID="759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed" exitCode=0 Dec 03 11:13:15 crc kubenswrapper[4756]: I1203 11:13:15.706135 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4570f01f-6639-41a5-9201-c49ed4fdefa8","Type":"ContainerDied","Data":"759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed"} Dec 03 11:13:16 crc 
kubenswrapper[4756]: I1203 11:13:16.081434 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xlz9h" podUID="e033887b-a32e-4141-9812-455b70f85d39" containerName="ovn-controller" probeResult="failure" output=< Dec 03 11:13:16 crc kubenswrapper[4756]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 03 11:13:16 crc kubenswrapper[4756]: > Dec 03 11:13:16 crc kubenswrapper[4756]: I1203 11:13:16.720263 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:13:16 crc kubenswrapper[4756]: I1203 11:13:16.742007 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4570f01f-6639-41a5-9201-c49ed4fdefa8","Type":"ContainerStarted","Data":"7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca"} Dec 03 11:13:16 crc kubenswrapper[4756]: I1203 11:13:16.742288 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:13:16 crc kubenswrapper[4756]: I1203 11:13:16.746505 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-269cd" Dec 03 11:13:16 crc kubenswrapper[4756]: I1203 11:13:16.824877 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371955.029926 podStartE2EDuration="1m21.824849089s" podCreationTimestamp="2025-12-03 11:11:55 +0000 UTC" firstStartedPulling="2025-12-03 11:11:58.464462221 +0000 UTC m=+1129.494463465" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:13:16.818691917 +0000 UTC m=+1207.848693181" watchObservedRunningTime="2025-12-03 11:13:16.824849089 +0000 UTC m=+1207.854850333" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.008413 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-xlz9h-config-cjwlx"] Dec 03 11:13:17 crc kubenswrapper[4756]: E1203 11:13:17.008906 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec32d41-0b95-4035-b092-aa00a4c57129" containerName="mariadb-account-create-update" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.008928 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec32d41-0b95-4035-b092-aa00a4c57129" containerName="mariadb-account-create-update" Dec 03 11:13:17 crc kubenswrapper[4756]: E1203 11:13:17.008941 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44226660-d2e9-4c13-bb9c-e2113bc0fcec" containerName="mariadb-database-create" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.009852 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="44226660-d2e9-4c13-bb9c-e2113bc0fcec" containerName="mariadb-database-create" Dec 03 11:13:17 crc kubenswrapper[4756]: E1203 11:13:17.009897 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbb2e2-672e-48fb-8916-ccb83e962bf3" containerName="swift-ring-rebalance" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.009920 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbb2e2-672e-48fb-8916-ccb83e962bf3" containerName="swift-ring-rebalance" Dec 03 11:13:17 crc kubenswrapper[4756]: E1203 11:13:17.009936 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e512a844-d4d9-4bc3-88bb-a0f11fb31821" containerName="mariadb-account-create-update" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.009965 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e512a844-d4d9-4bc3-88bb-a0f11fb31821" containerName="mariadb-account-create-update" Dec 03 11:13:17 crc kubenswrapper[4756]: E1203 11:13:17.009983 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc6158c-bc38-42b6-9fbc-2bd15e907f48" containerName="mariadb-database-create" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.009991 
4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc6158c-bc38-42b6-9fbc-2bd15e907f48" containerName="mariadb-database-create" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.010209 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc6158c-bc38-42b6-9fbc-2bd15e907f48" containerName="mariadb-database-create" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.010244 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="44226660-d2e9-4c13-bb9c-e2113bc0fcec" containerName="mariadb-database-create" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.010262 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e512a844-d4d9-4bc3-88bb-a0f11fb31821" containerName="mariadb-account-create-update" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.010278 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfbb2e2-672e-48fb-8916-ccb83e962bf3" containerName="swift-ring-rebalance" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.010292 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec32d41-0b95-4035-b092-aa00a4c57129" containerName="mariadb-account-create-update" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.011148 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.015039 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.027378 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xlz9h-config-cjwlx"] Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.127739 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-log-ovn\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.127838 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-scripts\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.127875 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-additional-scripts\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.127968 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run-ovn\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: 
\"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.127999 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.128099 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv72h\" (UniqueName: \"kubernetes.io/projected/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-kube-api-access-fv72h\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.229672 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv72h\" (UniqueName: \"kubernetes.io/projected/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-kube-api-access-fv72h\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.229737 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-log-ovn\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.229793 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-scripts\") pod 
\"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.229817 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-additional-scripts\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.229877 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run-ovn\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.229899 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.230168 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.230178 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run-ovn\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: 
\"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.230487 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-log-ovn\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.231010 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-additional-scripts\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.232431 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-scripts\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.271312 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv72h\" (UniqueName: \"kubernetes.io/projected/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-kube-api-access-fv72h\") pod \"ovn-controller-xlz9h-config-cjwlx\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:17 crc kubenswrapper[4756]: I1203 11:13:17.338621 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:18 crc kubenswrapper[4756]: I1203 11:13:18.311197 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xlz9h-config-cjwlx"] Dec 03 11:13:18 crc kubenswrapper[4756]: W1203 11:13:18.321808 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13eed6b2_cffb_4d50_ab4e_164e1abcbb94.slice/crio-46c283701fe83c50d924b43307986107e47e926b6513d2c40949c9efcff74a3f WatchSource:0}: Error finding container 46c283701fe83c50d924b43307986107e47e926b6513d2c40949c9efcff74a3f: Status 404 returned error can't find the container with id 46c283701fe83c50d924b43307986107e47e926b6513d2c40949c9efcff74a3f Dec 03 11:13:18 crc kubenswrapper[4756]: I1203 11:13:18.893602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xlz9h-config-cjwlx" event={"ID":"13eed6b2-cffb-4d50-ab4e-164e1abcbb94","Type":"ContainerStarted","Data":"46c283701fe83c50d924b43307986107e47e926b6513d2c40949c9efcff74a3f"} Dec 03 11:13:19 crc kubenswrapper[4756]: I1203 11:13:19.908042 4756 generic.go:334] "Generic (PLEG): container finished" podID="13eed6b2-cffb-4d50-ab4e-164e1abcbb94" containerID="a28f520e94ce480a4d3fb55960962902ba76bfab9a9e17ec5191a5a39a3312c3" exitCode=0 Dec 03 11:13:19 crc kubenswrapper[4756]: I1203 11:13:19.908144 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xlz9h-config-cjwlx" event={"ID":"13eed6b2-cffb-4d50-ab4e-164e1abcbb94","Type":"ContainerDied","Data":"a28f520e94ce480a4d3fb55960962902ba76bfab9a9e17ec5191a5a39a3312c3"} Dec 03 11:13:20 crc kubenswrapper[4756]: I1203 11:13:20.981192 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xlz9h" Dec 03 11:13:25 crc kubenswrapper[4756]: I1203 11:13:25.864834 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:13:25 crc kubenswrapper[4756]: I1203 11:13:25.880554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/99507e0d-929b-4d13-b820-5fd2869d776e-etc-swift\") pod \"swift-storage-0\" (UID: \"99507e0d-929b-4d13-b820-5fd2869d776e\") " pod="openstack/swift-storage-0" Dec 03 11:13:25 crc kubenswrapper[4756]: I1203 11:13:25.968074 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 03 11:13:27 crc kubenswrapper[4756]: I1203 11:13:27.610362 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 03 11:13:27 crc kubenswrapper[4756]: I1203 11:13:27.781381 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.133324 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-nnw8s"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.136977 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.191103 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nnw8s"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.199645 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-operator-scripts\") pod \"barbican-db-create-nnw8s\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.199715 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqpmb\" (UniqueName: \"kubernetes.io/projected/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-kube-api-access-kqpmb\") pod \"barbican-db-create-nnw8s\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.301727 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-operator-scripts\") pod \"barbican-db-create-nnw8s\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.301820 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqpmb\" (UniqueName: \"kubernetes.io/projected/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-kube-api-access-kqpmb\") pod \"barbican-db-create-nnw8s\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.303391 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-operator-scripts\") pod \"barbican-db-create-nnw8s\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.310127 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4964-account-create-update-hzmkz"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.311404 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.313825 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.328106 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4964-account-create-update-hzmkz"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.333865 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqpmb\" (UniqueName: \"kubernetes.io/projected/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-kube-api-access-kqpmb\") pod \"barbican-db-create-nnw8s\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.416121 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vn6kk"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.417385 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.434033 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vn6kk"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.522502 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnz6k\" (UniqueName: \"kubernetes.io/projected/f637997b-3cd4-4576-abd8-385f1d6484f5-kube-api-access-fnz6k\") pod \"cinder-db-create-vn6kk\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.522605 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c9450f-ddea-4125-97b0-1099d718ec93-operator-scripts\") pod \"barbican-4964-account-create-update-hzmkz\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.522631 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637997b-3cd4-4576-abd8-385f1d6484f5-operator-scripts\") pod \"cinder-db-create-vn6kk\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.522669 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4jg\" (UniqueName: \"kubernetes.io/projected/60c9450f-ddea-4125-97b0-1099d718ec93-kube-api-access-zn4jg\") pod \"barbican-4964-account-create-update-hzmkz\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.524028 4756 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.563504 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jqzxs"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.569389 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.592059 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jqzxs"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.608726 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pxhw5"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.610611 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.724306 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.724503 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b4cbg" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.724682 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.724901 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.735337 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c9450f-ddea-4125-97b0-1099d718ec93-operator-scripts\") pod \"barbican-4964-account-create-update-hzmkz\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " 
pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.736091 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637997b-3cd4-4576-abd8-385f1d6484f5-operator-scripts\") pod \"cinder-db-create-vn6kk\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.736219 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4jg\" (UniqueName: \"kubernetes.io/projected/60c9450f-ddea-4125-97b0-1099d718ec93-kube-api-access-zn4jg\") pod \"barbican-4964-account-create-update-hzmkz\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.736395 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnz6k\" (UniqueName: \"kubernetes.io/projected/f637997b-3cd4-4576-abd8-385f1d6484f5-kube-api-access-fnz6k\") pod \"cinder-db-create-vn6kk\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.741136 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637997b-3cd4-4576-abd8-385f1d6484f5-operator-scripts\") pod \"cinder-db-create-vn6kk\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.799464 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fc3a-account-create-update-jqp99"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.800724 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4jg\" (UniqueName: 
\"kubernetes.io/projected/60c9450f-ddea-4125-97b0-1099d718ec93-kube-api-access-zn4jg\") pod \"barbican-4964-account-create-update-hzmkz\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.803239 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.800555 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c9450f-ddea-4125-97b0-1099d718ec93-operator-scripts\") pod \"barbican-4964-account-create-update-hzmkz\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.810553 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.820163 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pxhw5"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.822897 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnz6k\" (UniqueName: \"kubernetes.io/projected/f637997b-3cd4-4576-abd8-385f1d6484f5-kube-api-access-fnz6k\") pod \"cinder-db-create-vn6kk\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.835996 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fc3a-account-create-update-jqp99"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.840623 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgh9\" (UniqueName: 
\"kubernetes.io/projected/f676418d-e449-4e99-a7f7-f1f0d89590fe-kube-api-access-hpgh9\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.840761 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-config-data\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.840839 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-combined-ca-bundle\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.840975 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjs85\" (UniqueName: \"kubernetes.io/projected/9dfd4e44-d633-40a6-a398-1ce5e264393a-kube-api-access-gjs85\") pod \"neutron-db-create-jqzxs\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.841053 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxm96\" (UniqueName: \"kubernetes.io/projected/604e757c-4c85-4eab-ac45-12c3236c3d1b-kube-api-access-kxm96\") pod \"cinder-fc3a-account-create-update-jqp99\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.841185 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/604e757c-4c85-4eab-ac45-12c3236c3d1b-operator-scripts\") pod \"cinder-fc3a-account-create-update-jqp99\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.841313 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfd4e44-d633-40a6-a398-1ce5e264393a-operator-scripts\") pod \"neutron-db-create-jqzxs\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.896131 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-538b-account-create-update-ff9pf"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.909897 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.913022 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.934210 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.943625 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxm96\" (UniqueName: \"kubernetes.io/projected/604e757c-4c85-4eab-ac45-12c3236c3d1b-kube-api-access-kxm96\") pod \"cinder-fc3a-account-create-update-jqp99\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.943720 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z8wt\" (UniqueName: \"kubernetes.io/projected/e1877248-2c59-4f2f-b6a7-325fbac239a0-kube-api-access-7z8wt\") pod \"neutron-538b-account-create-update-ff9pf\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.943783 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/604e757c-4c85-4eab-ac45-12c3236c3d1b-operator-scripts\") pod \"cinder-fc3a-account-create-update-jqp99\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.943845 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1877248-2c59-4f2f-b6a7-325fbac239a0-operator-scripts\") pod \"neutron-538b-account-create-update-ff9pf\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.943900 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9dfd4e44-d633-40a6-a398-1ce5e264393a-operator-scripts\") pod \"neutron-db-create-jqzxs\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.945094 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/604e757c-4c85-4eab-ac45-12c3236c3d1b-operator-scripts\") pod \"cinder-fc3a-account-create-update-jqp99\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.947065 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfd4e44-d633-40a6-a398-1ce5e264393a-operator-scripts\") pod \"neutron-db-create-jqzxs\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.948840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgh9\" (UniqueName: \"kubernetes.io/projected/f676418d-e449-4e99-a7f7-f1f0d89590fe-kube-api-access-hpgh9\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.949037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-config-data\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.949155 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-combined-ca-bundle\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.949343 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjs85\" (UniqueName: \"kubernetes.io/projected/9dfd4e44-d633-40a6-a398-1ce5e264393a-kube-api-access-gjs85\") pod \"neutron-db-create-jqzxs\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.954890 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-538b-account-create-update-ff9pf"] Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.958636 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-config-data\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.999074 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjs85\" (UniqueName: \"kubernetes.io/projected/9dfd4e44-d633-40a6-a398-1ce5e264393a-kube-api-access-gjs85\") pod \"neutron-db-create-jqzxs\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:28 crc kubenswrapper[4756]: I1203 11:13:28.999537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxm96\" (UniqueName: \"kubernetes.io/projected/604e757c-4c85-4eab-ac45-12c3236c3d1b-kube-api-access-kxm96\") pod \"cinder-fc3a-account-create-update-jqp99\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.003336 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-combined-ca-bundle\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.017549 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgh9\" (UniqueName: \"kubernetes.io/projected/f676418d-e449-4e99-a7f7-f1f0d89590fe-kube-api-access-hpgh9\") pod \"keystone-db-sync-pxhw5\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.051384 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.052229 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z8wt\" (UniqueName: \"kubernetes.io/projected/e1877248-2c59-4f2f-b6a7-325fbac239a0-kube-api-access-7z8wt\") pod \"neutron-538b-account-create-update-ff9pf\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.052341 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1877248-2c59-4f2f-b6a7-325fbac239a0-operator-scripts\") pod \"neutron-538b-account-create-update-ff9pf\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.053619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1877248-2c59-4f2f-b6a7-325fbac239a0-operator-scripts\") pod 
\"neutron-538b-account-create-update-ff9pf\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.082553 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z8wt\" (UniqueName: \"kubernetes.io/projected/e1877248-2c59-4f2f-b6a7-325fbac239a0-kube-api-access-7z8wt\") pod \"neutron-538b-account-create-update-ff9pf\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.085357 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.191666 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.204427 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:29 crc kubenswrapper[4756]: I1203 11:13:29.210374 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:33 crc kubenswrapper[4756]: E1203 11:13:33.785364 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 03 11:13:33 crc kubenswrapper[4756]: E1203 11:13:33.786177 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwszb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],}
,Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-7bvmm_openstack(07a714a5-c627-43b1-8bc1-85e157c25fb0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:13:33 crc kubenswrapper[4756]: E1203 11:13:33.787542 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-7bvmm" podUID="07a714a5-c627-43b1-8bc1-85e157c25fb0" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.833592 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.951811 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv72h\" (UniqueName: \"kubernetes.io/projected/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-kube-api-access-fv72h\") pod \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952193 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run\") pod \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952265 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run-ovn\") pod \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952316 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-log-ovn\") pod \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952374 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-additional-scripts\") pod \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952388 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run" (OuterVolumeSpecName: "var-run") pod "13eed6b2-cffb-4d50-ab4e-164e1abcbb94" (UID: "13eed6b2-cffb-4d50-ab4e-164e1abcbb94"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952441 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-scripts\") pod \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\" (UID: \"13eed6b2-cffb-4d50-ab4e-164e1abcbb94\") " Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952454 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "13eed6b2-cffb-4d50-ab4e-164e1abcbb94" (UID: "13eed6b2-cffb-4d50-ab4e-164e1abcbb94"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.952474 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "13eed6b2-cffb-4d50-ab4e-164e1abcbb94" (UID: "13eed6b2-cffb-4d50-ab4e-164e1abcbb94"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.953250 4756 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.953272 4756 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.953283 4756 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.953586 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "13eed6b2-cffb-4d50-ab4e-164e1abcbb94" (UID: "13eed6b2-cffb-4d50-ab4e-164e1abcbb94"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.955585 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-scripts" (OuterVolumeSpecName: "scripts") pod "13eed6b2-cffb-4d50-ab4e-164e1abcbb94" (UID: "13eed6b2-cffb-4d50-ab4e-164e1abcbb94"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:33 crc kubenswrapper[4756]: I1203 11:13:33.963742 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-kube-api-access-fv72h" (OuterVolumeSpecName: "kube-api-access-fv72h") pod "13eed6b2-cffb-4d50-ab4e-164e1abcbb94" (UID: "13eed6b2-cffb-4d50-ab4e-164e1abcbb94"). InnerVolumeSpecName "kube-api-access-fv72h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.055838 4756 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.055870 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.055880 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv72h\" (UniqueName: \"kubernetes.io/projected/13eed6b2-cffb-4d50-ab4e-164e1abcbb94-kube-api-access-fv72h\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.177510 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xlz9h-config-cjwlx" Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.178855 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xlz9h-config-cjwlx" event={"ID":"13eed6b2-cffb-4d50-ab4e-164e1abcbb94","Type":"ContainerDied","Data":"46c283701fe83c50d924b43307986107e47e926b6513d2c40949c9efcff74a3f"} Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.178914 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c283701fe83c50d924b43307986107e47e926b6513d2c40949c9efcff74a3f" Dec 03 11:13:34 crc kubenswrapper[4756]: E1203 11:13:34.180047 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-7bvmm" podUID="07a714a5-c627-43b1-8bc1-85e157c25fb0" Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.777494 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.827992 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jqzxs"] Dec 03 11:13:34 crc kubenswrapper[4756]: I1203 11:13:34.994729 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4964-account-create-update-hzmkz"] Dec 03 11:13:35 crc kubenswrapper[4756]: W1203 11:13:35.024491 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60c9450f_ddea_4125_97b0_1099d718ec93.slice/crio-f9ccc91994833899a183d6695f6cf27cef48715db6daeb51ee38f50201b3574b WatchSource:0}: Error finding container f9ccc91994833899a183d6695f6cf27cef48715db6daeb51ee38f50201b3574b: Status 404 returned error can't find the container with id 
f9ccc91994833899a183d6695f6cf27cef48715db6daeb51ee38f50201b3574b Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.024604 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xlz9h-config-cjwlx"] Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.041392 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pxhw5"] Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.056100 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xlz9h-config-cjwlx"] Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.066248 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-538b-account-create-update-ff9pf"] Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.073390 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fc3a-account-create-update-jqp99"] Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.176296 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vn6kk"] Dec 03 11:13:35 crc kubenswrapper[4756]: W1203 11:13:35.187904 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf637997b_3cd4_4576_abd8_385f1d6484f5.slice/crio-c8e188f9f06f7eac64aec60608f7cbedef1ec456d58d9e7c613ce42304d5f836 WatchSource:0}: Error finding container c8e188f9f06f7eac64aec60608f7cbedef1ec456d58d9e7c613ce42304d5f836: Status 404 returned error can't find the container with id c8e188f9f06f7eac64aec60608f7cbedef1ec456d58d9e7c613ce42304d5f836 Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.188112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4964-account-create-update-hzmkz" event={"ID":"60c9450f-ddea-4125-97b0-1099d718ec93","Type":"ContainerStarted","Data":"f9ccc91994833899a183d6695f6cf27cef48715db6daeb51ee38f50201b3574b"} Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 
11:13:35.188901 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nnw8s"] Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.189098 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-538b-account-create-update-ff9pf" event={"ID":"e1877248-2c59-4f2f-b6a7-325fbac239a0","Type":"ContainerStarted","Data":"9a4fe065bc7e9bb75ea423a7a6e30d016d46a336da4e4f01ffa46f03d7cfa5a9"} Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.190044 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxhw5" event={"ID":"f676418d-e449-4e99-a7f7-f1f0d89590fe","Type":"ContainerStarted","Data":"b358a905ac278bbbb2822665d9d1464185f06763e25da52ad37a4c68ab9a8f69"} Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.190812 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fc3a-account-create-update-jqp99" event={"ID":"604e757c-4c85-4eab-ac45-12c3236c3d1b","Type":"ContainerStarted","Data":"59a7e942aa4aab909dbc2cd22559423833f5ac97aedaed763b199c4f246a2b7d"} Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.191677 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jqzxs" event={"ID":"9dfd4e44-d633-40a6-a398-1ce5e264393a","Type":"ContainerStarted","Data":"3ebd7c00633dd86414b4aff7d9b89819117b171b241485b9be886d3757794248"} Dec 03 11:13:35 crc kubenswrapper[4756]: W1203 11:13:35.202541 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e7e60ab_3bd7_44bc_85c3_a44eeaf171d1.slice/crio-86976cda5af9633f3c026e772f33d385423d01c1e672db5182d7dc77c67da158 WatchSource:0}: Error finding container 86976cda5af9633f3c026e772f33d385423d01c1e672db5182d7dc77c67da158: Status 404 returned error can't find the container with id 86976cda5af9633f3c026e772f33d385423d01c1e672db5182d7dc77c67da158 Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.202688 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"22b1f5ca25c2dc351c19e313ae2d575c11941d04a181ef6590fd674332c10250"} Dec 03 11:13:35 crc kubenswrapper[4756]: I1203 11:13:35.250977 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13eed6b2-cffb-4d50-ab4e-164e1abcbb94" path="/var/lib/kubelet/pods/13eed6b2-cffb-4d50-ab4e-164e1abcbb94/volumes" Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.231362 4756 generic.go:334] "Generic (PLEG): container finished" podID="60c9450f-ddea-4125-97b0-1099d718ec93" containerID="c6f98f6d8beb1822ad45047228636f7b1f1745548851476622f17821713c4ba6" exitCode=0 Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.231484 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4964-account-create-update-hzmkz" event={"ID":"60c9450f-ddea-4125-97b0-1099d718ec93","Type":"ContainerDied","Data":"c6f98f6d8beb1822ad45047228636f7b1f1745548851476622f17821713c4ba6"} Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.235510 4756 generic.go:334] "Generic (PLEG): container finished" podID="e1877248-2c59-4f2f-b6a7-325fbac239a0" containerID="cf1945d2c9fef28d1709449efc0393f253159881b44f102481ba7fdadfe02d64" exitCode=0 Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.235588 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-538b-account-create-update-ff9pf" event={"ID":"e1877248-2c59-4f2f-b6a7-325fbac239a0","Type":"ContainerDied","Data":"cf1945d2c9fef28d1709449efc0393f253159881b44f102481ba7fdadfe02d64"} Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.240630 4756 generic.go:334] "Generic (PLEG): container finished" podID="f637997b-3cd4-4576-abd8-385f1d6484f5" containerID="e3c729765b8418f27c72e8639035bd1c5abf0f3e39683649341fea710d3e797e" exitCode=0 Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.240695 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-db-create-vn6kk" event={"ID":"f637997b-3cd4-4576-abd8-385f1d6484f5","Type":"ContainerDied","Data":"e3c729765b8418f27c72e8639035bd1c5abf0f3e39683649341fea710d3e797e"} Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.240714 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vn6kk" event={"ID":"f637997b-3cd4-4576-abd8-385f1d6484f5","Type":"ContainerStarted","Data":"c8e188f9f06f7eac64aec60608f7cbedef1ec456d58d9e7c613ce42304d5f836"} Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.244985 4756 generic.go:334] "Generic (PLEG): container finished" podID="604e757c-4c85-4eab-ac45-12c3236c3d1b" containerID="c698ad1d5de9deafa2502775f4994204654825fc336b97240d6676eb17d3dfce" exitCode=0 Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.245096 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fc3a-account-create-update-jqp99" event={"ID":"604e757c-4c85-4eab-ac45-12c3236c3d1b","Type":"ContainerDied","Data":"c698ad1d5de9deafa2502775f4994204654825fc336b97240d6676eb17d3dfce"} Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.253438 4756 generic.go:334] "Generic (PLEG): container finished" podID="9dfd4e44-d633-40a6-a398-1ce5e264393a" containerID="050316102fe570a18862dbfb8ac43cc813c9b905e7a2f0ae973c9723f06cc3fa" exitCode=0 Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.253620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jqzxs" event={"ID":"9dfd4e44-d633-40a6-a398-1ce5e264393a","Type":"ContainerDied","Data":"050316102fe570a18862dbfb8ac43cc813c9b905e7a2f0ae973c9723f06cc3fa"} Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.258247 4756 generic.go:334] "Generic (PLEG): container finished" podID="3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1" containerID="28b29dfbec1a6402dff7b1bde8459389a0e6f76ddc9cb2799dd16ee6fead5050" exitCode=0 Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.258319 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nnw8s" event={"ID":"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1","Type":"ContainerDied","Data":"28b29dfbec1a6402dff7b1bde8459389a0e6f76ddc9cb2799dd16ee6fead5050"} Dec 03 11:13:36 crc kubenswrapper[4756]: I1203 11:13:36.258384 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nnw8s" event={"ID":"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1","Type":"ContainerStarted","Data":"86976cda5af9633f3c026e772f33d385423d01c1e672db5182d7dc77c67da158"} Dec 03 11:13:37 crc kubenswrapper[4756]: I1203 11:13:37.274286 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"e65d6f0b1d97739d08410e28470fec5b408daf8ab27246edc1b9408106e61eba"} Dec 03 11:13:37 crc kubenswrapper[4756]: I1203 11:13:37.610437 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:13:37 crc kubenswrapper[4756]: I1203 11:13:37.866074 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.020565 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjs85\" (UniqueName: \"kubernetes.io/projected/9dfd4e44-d633-40a6-a398-1ce5e264393a-kube-api-access-gjs85\") pod \"9dfd4e44-d633-40a6-a398-1ce5e264393a\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.020615 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfd4e44-d633-40a6-a398-1ce5e264393a-operator-scripts\") pod \"9dfd4e44-d633-40a6-a398-1ce5e264393a\" (UID: \"9dfd4e44-d633-40a6-a398-1ce5e264393a\") " Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.021833 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dfd4e44-d633-40a6-a398-1ce5e264393a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9dfd4e44-d633-40a6-a398-1ce5e264393a" (UID: "9dfd4e44-d633-40a6-a398-1ce5e264393a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.044184 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfd4e44-d633-40a6-a398-1ce5e264393a-kube-api-access-gjs85" (OuterVolumeSpecName: "kube-api-access-gjs85") pod "9dfd4e44-d633-40a6-a398-1ce5e264393a" (UID: "9dfd4e44-d633-40a6-a398-1ce5e264393a"). InnerVolumeSpecName "kube-api-access-gjs85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.129635 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjs85\" (UniqueName: \"kubernetes.io/projected/9dfd4e44-d633-40a6-a398-1ce5e264393a-kube-api-access-gjs85\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.129684 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfd4e44-d633-40a6-a398-1ce5e264393a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.291280 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jqzxs" event={"ID":"9dfd4e44-d633-40a6-a398-1ce5e264393a","Type":"ContainerDied","Data":"3ebd7c00633dd86414b4aff7d9b89819117b171b241485b9be886d3757794248"} Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.292070 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ebd7c00633dd86414b4aff7d9b89819117b171b241485b9be886d3757794248" Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.291615 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jqzxs" Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.297573 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"c7c2121d50ae1d6c1d14072d36a9e2601786d273175aa71026885b7ee81bc971"} Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.297668 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"0f86e83c92a56402b496282512e032c73c9e4ba29364e05303d44910c74d4495"} Dec 03 11:13:38 crc kubenswrapper[4756]: I1203 11:13:38.297707 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"8eb997a0d0410540606a469b88398c0de15b47f67fbb3e927482edc11d89b481"} Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.349066 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.356602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4964-account-create-update-hzmkz" event={"ID":"60c9450f-ddea-4125-97b0-1099d718ec93","Type":"ContainerDied","Data":"f9ccc91994833899a183d6695f6cf27cef48715db6daeb51ee38f50201b3574b"} Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.356676 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ccc91994833899a183d6695f6cf27cef48715db6daeb51ee38f50201b3574b" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.357908 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.359176 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-538b-account-create-update-ff9pf" event={"ID":"e1877248-2c59-4f2f-b6a7-325fbac239a0","Type":"ContainerDied","Data":"9a4fe065bc7e9bb75ea423a7a6e30d016d46a336da4e4f01ffa46f03d7cfa5a9"} Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.359245 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4fe065bc7e9bb75ea423a7a6e30d016d46a336da4e4f01ffa46f03d7cfa5a9" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.364270 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.364257 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vn6kk" event={"ID":"f637997b-3cd4-4576-abd8-385f1d6484f5","Type":"ContainerDied","Data":"c8e188f9f06f7eac64aec60608f7cbedef1ec456d58d9e7c613ce42304d5f836"} Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.364335 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vn6kk" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.364387 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e188f9f06f7eac64aec60608f7cbedef1ec456d58d9e7c613ce42304d5f836" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.372845 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fc3a-account-create-update-jqp99" event={"ID":"604e757c-4c85-4eab-ac45-12c3236c3d1b","Type":"ContainerDied","Data":"59a7e942aa4aab909dbc2cd22559423833f5ac97aedaed763b199c4f246a2b7d"} Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.372902 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59a7e942aa4aab909dbc2cd22559423833f5ac97aedaed763b199c4f246a2b7d" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.379352 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nnw8s" event={"ID":"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1","Type":"ContainerDied","Data":"86976cda5af9633f3c026e772f33d385423d01c1e672db5182d7dc77c67da158"} Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.379408 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nnw8s" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.379409 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86976cda5af9633f3c026e772f33d385423d01c1e672db5182d7dc77c67da158" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.400058 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnz6k\" (UniqueName: \"kubernetes.io/projected/f637997b-3cd4-4576-abd8-385f1d6484f5-kube-api-access-fnz6k\") pod \"f637997b-3cd4-4576-abd8-385f1d6484f5\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.400147 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637997b-3cd4-4576-abd8-385f1d6484f5-operator-scripts\") pod \"f637997b-3cd4-4576-abd8-385f1d6484f5\" (UID: \"f637997b-3cd4-4576-abd8-385f1d6484f5\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.400290 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqpmb\" (UniqueName: \"kubernetes.io/projected/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-kube-api-access-kqpmb\") pod \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.400336 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z8wt\" (UniqueName: \"kubernetes.io/projected/e1877248-2c59-4f2f-b6a7-325fbac239a0-kube-api-access-7z8wt\") pod \"e1877248-2c59-4f2f-b6a7-325fbac239a0\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.400405 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-operator-scripts\") pod \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\" (UID: \"3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.400488 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1877248-2c59-4f2f-b6a7-325fbac239a0-operator-scripts\") pod \"e1877248-2c59-4f2f-b6a7-325fbac239a0\" (UID: \"e1877248-2c59-4f2f-b6a7-325fbac239a0\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.401910 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1877248-2c59-4f2f-b6a7-325fbac239a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1877248-2c59-4f2f-b6a7-325fbac239a0" (UID: "e1877248-2c59-4f2f-b6a7-325fbac239a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.402360 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f637997b-3cd4-4576-abd8-385f1d6484f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f637997b-3cd4-4576-abd8-385f1d6484f5" (UID: "f637997b-3cd4-4576-abd8-385f1d6484f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.402854 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1" (UID: "3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.406284 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f637997b-3cd4-4576-abd8-385f1d6484f5-kube-api-access-fnz6k" (OuterVolumeSpecName: "kube-api-access-fnz6k") pod "f637997b-3cd4-4576-abd8-385f1d6484f5" (UID: "f637997b-3cd4-4576-abd8-385f1d6484f5"). InnerVolumeSpecName "kube-api-access-fnz6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.406263 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1877248-2c59-4f2f-b6a7-325fbac239a0-kube-api-access-7z8wt" (OuterVolumeSpecName: "kube-api-access-7z8wt") pod "e1877248-2c59-4f2f-b6a7-325fbac239a0" (UID: "e1877248-2c59-4f2f-b6a7-325fbac239a0"). InnerVolumeSpecName "kube-api-access-7z8wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.412555 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-kube-api-access-kqpmb" (OuterVolumeSpecName: "kube-api-access-kqpmb") pod "3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1" (UID: "3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1"). InnerVolumeSpecName "kube-api-access-kqpmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.481264 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.490107 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.504492 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c9450f-ddea-4125-97b0-1099d718ec93-operator-scripts\") pod \"60c9450f-ddea-4125-97b0-1099d718ec93\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.504551 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn4jg\" (UniqueName: \"kubernetes.io/projected/60c9450f-ddea-4125-97b0-1099d718ec93-kube-api-access-zn4jg\") pod \"60c9450f-ddea-4125-97b0-1099d718ec93\" (UID: \"60c9450f-ddea-4125-97b0-1099d718ec93\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.505034 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqpmb\" (UniqueName: \"kubernetes.io/projected/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-kube-api-access-kqpmb\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.505050 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z8wt\" (UniqueName: \"kubernetes.io/projected/e1877248-2c59-4f2f-b6a7-325fbac239a0-kube-api-access-7z8wt\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.505062 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.505072 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1877248-2c59-4f2f-b6a7-325fbac239a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.505081 4756 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnz6k\" (UniqueName: \"kubernetes.io/projected/f637997b-3cd4-4576-abd8-385f1d6484f5-kube-api-access-fnz6k\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.505092 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f637997b-3cd4-4576-abd8-385f1d6484f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.505692 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60c9450f-ddea-4125-97b0-1099d718ec93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60c9450f-ddea-4125-97b0-1099d718ec93" (UID: "60c9450f-ddea-4125-97b0-1099d718ec93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.513205 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c9450f-ddea-4125-97b0-1099d718ec93-kube-api-access-zn4jg" (OuterVolumeSpecName: "kube-api-access-zn4jg") pod "60c9450f-ddea-4125-97b0-1099d718ec93" (UID: "60c9450f-ddea-4125-97b0-1099d718ec93"). InnerVolumeSpecName "kube-api-access-zn4jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.606223 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxm96\" (UniqueName: \"kubernetes.io/projected/604e757c-4c85-4eab-ac45-12c3236c3d1b-kube-api-access-kxm96\") pod \"604e757c-4c85-4eab-ac45-12c3236c3d1b\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.606577 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/604e757c-4c85-4eab-ac45-12c3236c3d1b-operator-scripts\") pod \"604e757c-4c85-4eab-ac45-12c3236c3d1b\" (UID: \"604e757c-4c85-4eab-ac45-12c3236c3d1b\") " Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.606949 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60c9450f-ddea-4125-97b0-1099d718ec93-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.606987 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn4jg\" (UniqueName: \"kubernetes.io/projected/60c9450f-ddea-4125-97b0-1099d718ec93-kube-api-access-zn4jg\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.607487 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604e757c-4c85-4eab-ac45-12c3236c3d1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "604e757c-4c85-4eab-ac45-12c3236c3d1b" (UID: "604e757c-4c85-4eab-ac45-12c3236c3d1b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.612782 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604e757c-4c85-4eab-ac45-12c3236c3d1b-kube-api-access-kxm96" (OuterVolumeSpecName: "kube-api-access-kxm96") pod "604e757c-4c85-4eab-ac45-12c3236c3d1b" (UID: "604e757c-4c85-4eab-ac45-12c3236c3d1b"). InnerVolumeSpecName "kube-api-access-kxm96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.708799 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/604e757c-4c85-4eab-ac45-12c3236c3d1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:42 crc kubenswrapper[4756]: I1203 11:13:42.708847 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxm96\" (UniqueName: \"kubernetes.io/projected/604e757c-4c85-4eab-ac45-12c3236c3d1b-kube-api-access-kxm96\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:43 crc kubenswrapper[4756]: I1203 11:13:43.417454 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxhw5" event={"ID":"f676418d-e449-4e99-a7f7-f1f0d89590fe","Type":"ContainerStarted","Data":"4fbf1df278c1ea45a4be40f36bf06c7af65b37db9076a119e097db33133c40b7"} Dec 03 11:13:43 crc kubenswrapper[4756]: I1203 11:13:43.425160 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-538b-account-create-update-ff9pf" Dec 03 11:13:43 crc kubenswrapper[4756]: I1203 11:13:43.426164 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"9b8a8885084aa852460f6604f8ca580de0a894f347ce2168a35744f15ed1e110"} Dec 03 11:13:43 crc kubenswrapper[4756]: I1203 11:13:43.426237 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fc3a-account-create-update-jqp99" Dec 03 11:13:43 crc kubenswrapper[4756]: I1203 11:13:43.426586 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4964-account-create-update-hzmkz" Dec 03 11:13:43 crc kubenswrapper[4756]: I1203 11:13:43.450466 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pxhw5" podStartSLOduration=8.220025915 podStartE2EDuration="15.450433148s" podCreationTimestamp="2025-12-03 11:13:28 +0000 UTC" firstStartedPulling="2025-12-03 11:13:35.031127891 +0000 UTC m=+1226.061129125" lastFinishedPulling="2025-12-03 11:13:42.261535114 +0000 UTC m=+1233.291536358" observedRunningTime="2025-12-03 11:13:43.439340812 +0000 UTC m=+1234.469342056" watchObservedRunningTime="2025-12-03 11:13:43.450433148 +0000 UTC m=+1234.480434412" Dec 03 11:13:44 crc kubenswrapper[4756]: I1203 11:13:44.612436 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"056824de83eb6914a92d1ae70713caf2cffbc3b3a8509074cf8e82aaa26857a4"} Dec 03 11:13:44 crc kubenswrapper[4756]: I1203 11:13:44.612482 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"de44686fb22b57ea72b23c8befc255908168d813524aefb53a5d5fce2aef3ac8"} Dec 03 11:13:44 crc kubenswrapper[4756]: I1203 11:13:44.612492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"4a8eafc09fa42a9868821e2764f7062847453b90d4a292beae1db047996ae444"} Dec 03 11:13:56 crc kubenswrapper[4756]: I1203 11:13:56.377560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7bvmm" event={"ID":"07a714a5-c627-43b1-8bc1-85e157c25fb0","Type":"ContainerStarted","Data":"00ac1e2785ecdf6cd26b2d3b4c4a9c1203e04b4868e378b2521588c2baa189bb"} Dec 03 11:13:56 crc kubenswrapper[4756]: I1203 11:13:56.386169 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"036caa08f72f7d166fb4c872182d7ef30915535055d9f2c0a6887804f581f3f5"} Dec 03 11:13:56 crc kubenswrapper[4756]: I1203 11:13:56.386232 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"6662f28ac44a051165b7ce3df56166b05a4661894bcd9a6c6d9bded9e249db53"} Dec 03 11:13:56 crc kubenswrapper[4756]: I1203 11:13:56.386246 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"bf1f404530912e7be1d27b15e84c72d03a849ca7f5d552d569b6d48715c4c5e3"} Dec 03 11:13:56 crc kubenswrapper[4756]: I1203 11:13:56.386257 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"fbc2642420284233f46fd56c18198b329991159fc5b957fb83298e1500540a7f"} Dec 03 11:13:56 crc 
kubenswrapper[4756]: I1203 11:13:56.409150 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7bvmm" podStartSLOduration=2.533138347 podStartE2EDuration="45.409122168s" podCreationTimestamp="2025-12-03 11:13:11 +0000 UTC" firstStartedPulling="2025-12-03 11:13:12.440785568 +0000 UTC m=+1203.470786812" lastFinishedPulling="2025-12-03 11:13:55.316769389 +0000 UTC m=+1246.346770633" observedRunningTime="2025-12-03 11:13:56.397488336 +0000 UTC m=+1247.427489600" watchObservedRunningTime="2025-12-03 11:13:56.409122168 +0000 UTC m=+1247.439123412" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.408297 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"63137ff3d178e0279d62bb640c98a6a94524521476a1f2f0b6971f7e92ce958f"} Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.408365 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"92a74c76dd763faa0a627c2e4aaa628c03f4f0caa4f15b6a7993ac2729366a48"} Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.408380 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"99507e0d-929b-4d13-b820-5fd2869d776e","Type":"ContainerStarted","Data":"1f8f3dbee88a2d1a369f14693f4298b921669308dd157bd3a52f66ffc04c5fac"} Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.411618 4756 generic.go:334] "Generic (PLEG): container finished" podID="f676418d-e449-4e99-a7f7-f1f0d89590fe" containerID="4fbf1df278c1ea45a4be40f36bf06c7af65b37db9076a119e097db33133c40b7" exitCode=0 Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.411668 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxhw5" 
event={"ID":"f676418d-e449-4e99-a7f7-f1f0d89590fe","Type":"ContainerDied","Data":"4fbf1df278c1ea45a4be40f36bf06c7af65b37db9076a119e097db33133c40b7"} Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.450214 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.928172885 podStartE2EDuration="1m5.450185792s" podCreationTimestamp="2025-12-03 11:12:52 +0000 UTC" firstStartedPulling="2025-12-03 11:13:34.792932935 +0000 UTC m=+1225.822934179" lastFinishedPulling="2025-12-03 11:13:55.314945842 +0000 UTC m=+1246.344947086" observedRunningTime="2025-12-03 11:13:57.443815103 +0000 UTC m=+1248.473816377" watchObservedRunningTime="2025-12-03 11:13:57.450185792 +0000 UTC m=+1248.480187036" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.727408 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bxgld"] Dec 03 11:13:57 crc kubenswrapper[4756]: E1203 11:13:57.728801 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13eed6b2-cffb-4d50-ab4e-164e1abcbb94" containerName="ovn-config" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.728891 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="13eed6b2-cffb-4d50-ab4e-164e1abcbb94" containerName="ovn-config" Dec 03 11:13:57 crc kubenswrapper[4756]: E1203 11:13:57.728994 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.729087 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: E1203 11:13:57.729206 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c9450f-ddea-4125-97b0-1099d718ec93" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 
11:13:57.729291 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c9450f-ddea-4125-97b0-1099d718ec93" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: E1203 11:13:57.729390 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f637997b-3cd4-4576-abd8-385f1d6484f5" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.729465 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f637997b-3cd4-4576-abd8-385f1d6484f5" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: E1203 11:13:57.729554 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfd4e44-d633-40a6-a398-1ce5e264393a" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.729697 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfd4e44-d633-40a6-a398-1ce5e264393a" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: E1203 11:13:57.729809 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1877248-2c59-4f2f-b6a7-325fbac239a0" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.729881 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1877248-2c59-4f2f-b6a7-325fbac239a0" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: E1203 11:13:57.731660 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604e757c-4c85-4eab-ac45-12c3236c3d1b" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.731757 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="604e757c-4c85-4eab-ac45-12c3236c3d1b" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.732231 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="604e757c-4c85-4eab-ac45-12c3236c3d1b" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.732315 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1877248-2c59-4f2f-b6a7-325fbac239a0" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.732387 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.732452 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="13eed6b2-cffb-4d50-ab4e-164e1abcbb94" containerName="ovn-config" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.732520 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfd4e44-d633-40a6-a398-1ce5e264393a" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.732582 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f637997b-3cd4-4576-abd8-385f1d6484f5" containerName="mariadb-database-create" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.732656 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c9450f-ddea-4125-97b0-1099d718ec93" containerName="mariadb-account-create-update" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.733802 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.737274 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.757254 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.757888 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-svc\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.758000 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.758093 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2cvn\" (UniqueName: \"kubernetes.io/projected/22630198-decd-40e7-968f-3073bc31260b-kube-api-access-n2cvn\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.758172 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.758319 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-config\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.763805 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bxgld"] Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.860773 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-config\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.860883 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.860929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-svc\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " 
pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.860977 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.861010 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2cvn\" (UniqueName: \"kubernetes.io/projected/22630198-decd-40e7-968f-3073bc31260b-kube-api-access-n2cvn\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.861040 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.862144 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-config\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.862247 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc 
kubenswrapper[4756]: I1203 11:13:57.862245 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.862504 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-svc\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.862504 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:57 crc kubenswrapper[4756]: I1203 11:13:57.884677 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2cvn\" (UniqueName: \"kubernetes.io/projected/22630198-decd-40e7-968f-3073bc31260b-kube-api-access-n2cvn\") pod \"dnsmasq-dns-764c5664d7-bxgld\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.060412 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.603332 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bxgld"] Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.833284 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.897615 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-config-data\") pod \"f676418d-e449-4e99-a7f7-f1f0d89590fe\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.898115 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpgh9\" (UniqueName: \"kubernetes.io/projected/f676418d-e449-4e99-a7f7-f1f0d89590fe-kube-api-access-hpgh9\") pod \"f676418d-e449-4e99-a7f7-f1f0d89590fe\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.898334 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-combined-ca-bundle\") pod \"f676418d-e449-4e99-a7f7-f1f0d89590fe\" (UID: \"f676418d-e449-4e99-a7f7-f1f0d89590fe\") " Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.908661 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f676418d-e449-4e99-a7f7-f1f0d89590fe-kube-api-access-hpgh9" (OuterVolumeSpecName: "kube-api-access-hpgh9") pod "f676418d-e449-4e99-a7f7-f1f0d89590fe" (UID: "f676418d-e449-4e99-a7f7-f1f0d89590fe"). InnerVolumeSpecName "kube-api-access-hpgh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.938061 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f676418d-e449-4e99-a7f7-f1f0d89590fe" (UID: "f676418d-e449-4e99-a7f7-f1f0d89590fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:13:58 crc kubenswrapper[4756]: I1203 11:13:58.970263 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-config-data" (OuterVolumeSpecName: "config-data") pod "f676418d-e449-4e99-a7f7-f1f0d89590fe" (UID: "f676418d-e449-4e99-a7f7-f1f0d89590fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.001658 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.001709 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f676418d-e449-4e99-a7f7-f1f0d89590fe-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.001722 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpgh9\" (UniqueName: \"kubernetes.io/projected/f676418d-e449-4e99-a7f7-f1f0d89590fe-kube-api-access-hpgh9\") on node \"crc\" DevicePath \"\"" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.439844 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pxhw5" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.439828 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pxhw5" event={"ID":"f676418d-e449-4e99-a7f7-f1f0d89590fe","Type":"ContainerDied","Data":"b358a905ac278bbbb2822665d9d1464185f06763e25da52ad37a4c68ab9a8f69"} Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.439992 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b358a905ac278bbbb2822665d9d1464185f06763e25da52ad37a4c68ab9a8f69" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.442182 4756 generic.go:334] "Generic (PLEG): container finished" podID="22630198-decd-40e7-968f-3073bc31260b" containerID="4be4192f9ab934da0b803dfb41b85185b70a13dc6b4f78e094af5bfe0126244c" exitCode=0 Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.442265 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" event={"ID":"22630198-decd-40e7-968f-3073bc31260b","Type":"ContainerDied","Data":"4be4192f9ab934da0b803dfb41b85185b70a13dc6b4f78e094af5bfe0126244c"} Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.442488 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" event={"ID":"22630198-decd-40e7-968f-3073bc31260b","Type":"ContainerStarted","Data":"48504f34c874074c6e91daf753de57795b19e86546f6f4ded36952c0cdb9046a"} Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.802819 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ltws7"] Dec 03 11:13:59 crc kubenswrapper[4756]: E1203 11:13:59.803697 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f676418d-e449-4e99-a7f7-f1f0d89590fe" containerName="keystone-db-sync" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.803714 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f676418d-e449-4e99-a7f7-f1f0d89590fe" 
containerName="keystone-db-sync" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.804273 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f676418d-e449-4e99-a7f7-f1f0d89590fe" containerName="keystone-db-sync" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.804968 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.807476 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.807761 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.807861 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b4cbg" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.808033 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.826771 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.827469 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bxgld"] Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.851395 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ltws7"] Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.916112 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-combined-ca-bundle\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:13:59 crc kubenswrapper[4756]: 
I1203 11:13:59.916336 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-fernet-keys\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.916534 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-credential-keys\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.916621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75sj\" (UniqueName: \"kubernetes.io/projected/c6c322e0-0115-4279-a8bd-2ad910f06ccd-kube-api-access-k75sj\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.916645 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-scripts\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.916681 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-config-data\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.923599 
4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vddd8"] Dec 03 11:13:59 crc kubenswrapper[4756]: I1203 11:13:59.925513 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.018125 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-credential-keys\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.018205 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75sj\" (UniqueName: \"kubernetes.io/projected/c6c322e0-0115-4279-a8bd-2ad910f06ccd-kube-api-access-k75sj\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.018229 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-scripts\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.018254 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-config-data\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.018306 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-combined-ca-bundle\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.018362 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-fernet-keys\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.033149 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-fernet-keys\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.034197 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-config-data\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.035212 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-credential-keys\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.065809 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-combined-ca-bundle\") pod \"keystone-bootstrap-ltws7\" (UID: 
\"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.124915 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-config\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.125498 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmzlq\" (UniqueName: \"kubernetes.io/projected/3473515e-02bd-4c83-abe8-f7e2870f9b47-kube-api-access-gmzlq\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.125558 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.125610 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-svc\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.125635 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.125688 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.156232 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7fh72"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.158800 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.162656 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.163038 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.163207 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xngvc" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.229438 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.229586 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-config\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.229650 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmzlq\" (UniqueName: \"kubernetes.io/projected/3473515e-02bd-4c83-abe8-f7e2870f9b47-kube-api-access-gmzlq\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.229684 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.233653 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-config\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.238891 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k75sj\" (UniqueName: \"kubernetes.io/projected/c6c322e0-0115-4279-a8bd-2ad910f06ccd-kube-api-access-k75sj\") pod \"keystone-bootstrap-ltws7\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.242163 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.243431 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.267931 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-svc\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.268404 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.269105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-svc\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.269335 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-scripts\") pod \"keystone-bootstrap-ltws7\" 
(UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.272181 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.307646 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vddd8"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.339804 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmzlq\" (UniqueName: \"kubernetes.io/projected/3473515e-02bd-4c83-abe8-f7e2870f9b47-kube-api-access-gmzlq\") pod \"dnsmasq-dns-5959f8865f-vddd8\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.349899 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7fh72"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.375250 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-db-sync-config-data\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.375343 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5lm\" (UniqueName: \"kubernetes.io/projected/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-kube-api-access-ps5lm\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 
11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.375367 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-scripts\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.375394 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-combined-ca-bundle\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.375435 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-etc-machine-id\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.375472 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-config-data\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.381988 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qvbgz"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.383863 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.391023 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-g2fts" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.398583 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.426456 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.431241 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b64b795f7-zbjz6"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.436065 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.453482 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pbr8c" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.454111 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.454456 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.454763 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.472161 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qvbgz"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478154 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-db-sync-config-data\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478203 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-config-data\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478243 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-combined-ca-bundle\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478271 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-scripts\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478313 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-db-sync-config-data\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478341 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-logs\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478377 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5lm\" (UniqueName: \"kubernetes.io/projected/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-kube-api-access-ps5lm\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478394 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-scripts\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478413 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-combined-ca-bundle\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478439 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-horizon-secret-key\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478458 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd5c5\" (UniqueName: 
\"kubernetes.io/projected/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-kube-api-access-fd5c5\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478481 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x59f6\" (UniqueName: \"kubernetes.io/projected/818fc868-fe09-4a91-aab2-91f11bac7386-kube-api-access-x59f6\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478502 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-etc-machine-id\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.478536 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-config-data\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.483913 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-etc-machine-id\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.486637 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-db-sync-config-data\") pod 
\"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.489556 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-scripts\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.496268 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-combined-ca-bundle\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.501161 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-config-data\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.504006 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b64b795f7-zbjz6"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.515390 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" event={"ID":"22630198-decd-40e7-968f-3073bc31260b","Type":"ContainerStarted","Data":"c6c902de5d1928dc2682561feaf2b6d67d3b19142a1a2ec4f15dede91da70d88"} Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.516383 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.538203 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5lm\" 
(UniqueName: \"kubernetes.io/projected/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-kube-api-access-ps5lm\") pod \"cinder-db-sync-7fh72\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.567047 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sxfpk"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.568689 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.581020 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lr48q"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.582142 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583345 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-combined-ca-bundle\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583378 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-config-data\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583422 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx8fm\" (UniqueName: \"kubernetes.io/projected/aee66879-3546-4e2b-9737-ddb96741650f-kube-api-access-xx8fm\") pod \"neutron-db-sync-sxfpk\" (UID: 
\"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583441 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-combined-ca-bundle\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583482 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-scripts\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583528 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-logs\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583588 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-config\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583625 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-horizon-secret-key\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 
11:14:00.583651 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd5c5\" (UniqueName: \"kubernetes.io/projected/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-kube-api-access-fd5c5\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583672 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x59f6\" (UniqueName: \"kubernetes.io/projected/818fc868-fe09-4a91-aab2-91f11bac7386-kube-api-access-x59f6\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.583727 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-db-sync-config-data\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.585342 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.585895 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.586003 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nmqwq" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.587541 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-scripts\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.593585 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-config-data\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.594300 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.595253 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-logs\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.604882 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wldm4" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.605118 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 03 11:14:00 
crc kubenswrapper[4756]: I1203 11:14:00.605264 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.606844 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sxfpk"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.607903 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-combined-ca-bundle\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.618584 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7fh72" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.626628 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-horizon-secret-key\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.627065 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-db-sync-config-data\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.634818 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x59f6\" (UniqueName: \"kubernetes.io/projected/818fc868-fe09-4a91-aab2-91f11bac7386-kube-api-access-x59f6\") pod \"barbican-db-sync-qvbgz\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " pod="openstack/barbican-db-sync-qvbgz" Dec 03 
11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.640143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd5c5\" (UniqueName: \"kubernetes.io/projected/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-kube-api-access-fd5c5\") pod \"horizon-b64b795f7-zbjz6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.640490 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lr48q"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.662786 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vddd8"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.689482 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-config\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.689600 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-combined-ca-bundle\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.690333 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx8fm\" (UniqueName: \"kubernetes.io/projected/aee66879-3546-4e2b-9737-ddb96741650f-kube-api-access-xx8fm\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.705698 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:14:00 crc 
kubenswrapper[4756]: I1203 11:14:00.711387 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-combined-ca-bundle\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.718030 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.719534 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx8fm\" (UniqueName: \"kubernetes.io/projected/aee66879-3546-4e2b-9737-ddb96741650f-kube-api-access-xx8fm\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.730250 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.733748 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.734010 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.752033 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.753114 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-config\") pod \"neutron-db-sync-sxfpk\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.800040 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.826379 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-run-httpd\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.826900 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-config-data\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.826944 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-config-data\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827093 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827142 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-log-httpd\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827265 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ceb74cf-6023-4536-bc04-6667b5f48967-logs\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827515 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-combined-ca-bundle\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827783 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb5r\" (UniqueName: \"kubernetes.io/projected/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-kube-api-access-crb5r\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827821 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-scripts\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827866 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827898 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-scripts\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.827941 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlh4m\" (UniqueName: \"kubernetes.io/projected/9ceb74cf-6023-4536-bc04-6667b5f48967-kube-api-access-jlh4m\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.833011 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-4r422"] Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.834854 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936534 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-combined-ca-bundle\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936606 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb5r\" (UniqueName: \"kubernetes.io/projected/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-kube-api-access-crb5r\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936627 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-scripts\") pod \"ceilometer-0\" 
(UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936666 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936684 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-scripts\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936703 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlh4m\" (UniqueName: \"kubernetes.io/projected/9ceb74cf-6023-4536-bc04-6667b5f48967-kube-api-access-jlh4m\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936729 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-run-httpd\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936749 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-config-data\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936766 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-config-data\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936797 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936821 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-log-httpd\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.936854 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ceb74cf-6023-4536-bc04-6667b5f48967-logs\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.937519 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ceb74cf-6023-4536-bc04-6667b5f48967-logs\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.938224 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-run-httpd\") pod \"ceilometer-0\" (UID: 
\"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.946028 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-config-data\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.947917 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-scripts\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.948599 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.948936 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-log-httpd\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.952554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-combined-ca-bundle\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.958230 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-scripts\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.970854 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.981442 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-config-data\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:00 crc kubenswrapper[4756]: I1203 11:14:00.999295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb5r\" (UniqueName: \"kubernetes.io/projected/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-kube-api-access-crb5r\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.002649 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " pod="openstack/ceilometer-0" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.005108 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlh4m\" (UniqueName: \"kubernetes.io/projected/9ceb74cf-6023-4536-bc04-6667b5f48967-kube-api-access-jlh4m\") pod \"placement-db-sync-lr48q\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.031871 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-4r422"] Dec 03 11:14:01 crc 
kubenswrapper[4756]: I1203 11:14:01.039783 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.045260 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7698f66497-h2r7v"] Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.047866 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.051236 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.051399 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.051451 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.051492 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hbd\" (UniqueName: 
\"kubernetes.io/projected/cd94f14d-05e4-4d74-befd-c508535db058-kube-api-access-p6hbd\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.051635 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.051839 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-config\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.098039 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7698f66497-h2r7v"] Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.113036 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" podStartSLOduration=4.113005563 podStartE2EDuration="4.113005563s" podCreationTimestamp="2025-12-03 11:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:00.5736376 +0000 UTC m=+1251.603638854" watchObservedRunningTime="2025-12-03 11:14:01.113005563 +0000 UTC m=+1252.143006807" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.118864 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.162771 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.162855 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.162886 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0d239b0-29b1-442f-894f-e2e64a8f1d08-logs\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.162912 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.162938 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hbd\" (UniqueName: \"kubernetes.io/projected/cd94f14d-05e4-4d74-befd-c508535db058-kube-api-access-p6hbd\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" 
Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.163118 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqnm7\" (UniqueName: \"kubernetes.io/projected/f0d239b0-29b1-442f-894f-e2e64a8f1d08-kube-api-access-pqnm7\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.163142 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.163203 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0d239b0-29b1-442f-894f-e2e64a8f1d08-horizon-secret-key\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.163231 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-config\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.163269 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-config-data\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc 
kubenswrapper[4756]: I1203 11:14:01.163310 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-scripts\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.169585 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.171246 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.172150 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.182721 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.188576 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-config\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.189565 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6hbd\" (UniqueName: \"kubernetes.io/projected/cd94f14d-05e4-4d74-befd-c508535db058-kube-api-access-p6hbd\") pod \"dnsmasq-dns-58dd9ff6bc-4r422\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.215156 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.271531 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0d239b0-29b1-442f-894f-e2e64a8f1d08-horizon-secret-key\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.271657 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-config-data\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.271731 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-scripts\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc 
kubenswrapper[4756]: I1203 11:14:01.271890 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0d239b0-29b1-442f-894f-e2e64a8f1d08-logs\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.272020 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqnm7\" (UniqueName: \"kubernetes.io/projected/f0d239b0-29b1-442f-894f-e2e64a8f1d08-kube-api-access-pqnm7\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.274084 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0d239b0-29b1-442f-894f-e2e64a8f1d08-logs\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.274230 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-scripts\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.275736 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-config-data\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.282408 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/f0d239b0-29b1-442f-894f-e2e64a8f1d08-horizon-secret-key\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.297730 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqnm7\" (UniqueName: \"kubernetes.io/projected/f0d239b0-29b1-442f-894f-e2e64a8f1d08-kube-api-access-pqnm7\") pod \"horizon-7698f66497-h2r7v\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.545110 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ltws7"] Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.574909 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" podUID="22630198-decd-40e7-968f-3073bc31260b" containerName="dnsmasq-dns" containerID="cri-o://c6c902de5d1928dc2682561feaf2b6d67d3b19142a1a2ec4f15dede91da70d88" gracePeriod=10 Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.583805 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.685309 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7fh72"] Dec 03 11:14:01 crc kubenswrapper[4756]: I1203 11:14:01.770974 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vddd8"] Dec 03 11:14:01 crc kubenswrapper[4756]: W1203 11:14:01.838681 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa7e078c_dfed_40c1_ac1c_d9db28aa9d96.slice/crio-dea6f57540fdff2bd6327650a7624c48ed2850917a780d8d9f435e9203e3a229 WatchSource:0}: Error finding container dea6f57540fdff2bd6327650a7624c48ed2850917a780d8d9f435e9203e3a229: Status 404 returned error can't find the container with id dea6f57540fdff2bd6327650a7624c48ed2850917a780d8d9f435e9203e3a229 Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.333101 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qvbgz"] Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.352227 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sxfpk"] Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.406583 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.522749 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-4r422"] Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.553119 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lr48q"] Dec 03 11:14:02 crc kubenswrapper[4756]: W1203 11:14:02.604154 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ceb74cf_6023_4536_bc04_6667b5f48967.slice/crio-f6cd0655dc51beb3169af403853a47d1b05d0e98ef554ebca7fb811e862646ef WatchSource:0}: Error finding container f6cd0655dc51beb3169af403853a47d1b05d0e98ef554ebca7fb811e862646ef: Status 404 returned error can't find the container with id f6cd0655dc51beb3169af403853a47d1b05d0e98ef554ebca7fb811e862646ef Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.609492 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b64b795f7-zbjz6"] Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.612748 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltws7" event={"ID":"c6c322e0-0115-4279-a8bd-2ad910f06ccd","Type":"ContainerStarted","Data":"49d48b50de161a7320a345f0e3295068f7200ceb79182a6f737c41f2039bedad"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.612816 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltws7" event={"ID":"c6c322e0-0115-4279-a8bd-2ad910f06ccd","Type":"ContainerStarted","Data":"16817f639927f553e32414a330cc41721e60c98103ee9e810fab268ba35f4e22"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.649304 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" event={"ID":"3473515e-02bd-4c83-abe8-f7e2870f9b47","Type":"ContainerStarted","Data":"3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.649372 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" event={"ID":"3473515e-02bd-4c83-abe8-f7e2870f9b47","Type":"ContainerStarted","Data":"c12b275a8db19eca44c01b2b3827a3e61c1a158457b504f075f9e27e8788e334"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.691874 4756 generic.go:334] "Generic (PLEG): container finished" podID="22630198-decd-40e7-968f-3073bc31260b" 
containerID="c6c902de5d1928dc2682561feaf2b6d67d3b19142a1a2ec4f15dede91da70d88" exitCode=0 Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.691973 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" event={"ID":"22630198-decd-40e7-968f-3073bc31260b","Type":"ContainerDied","Data":"c6c902de5d1928dc2682561feaf2b6d67d3b19142a1a2ec4f15dede91da70d88"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.739261 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qvbgz" event={"ID":"818fc868-fe09-4a91-aab2-91f11bac7386","Type":"ContainerStarted","Data":"b0152bf8149cb719e30e54d2913d162b841b91dbf1b51132398b4b6fecfb3142"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.742058 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ltws7" podStartSLOduration=3.7420137909999998 podStartE2EDuration="3.742013791s" podCreationTimestamp="2025-12-03 11:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:02.658986357 +0000 UTC m=+1253.688987601" watchObservedRunningTime="2025-12-03 11:14:02.742013791 +0000 UTC m=+1253.772015045" Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.766124 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fh72" event={"ID":"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96","Type":"ContainerStarted","Data":"dea6f57540fdff2bd6327650a7624c48ed2850917a780d8d9f435e9203e3a229"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.774592 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04","Type":"ContainerStarted","Data":"319b8dc922af5291124e96ff3e937299659e85a274f9428c91834987f245d817"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.776135 4756 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-db-sync-sxfpk" event={"ID":"aee66879-3546-4e2b-9737-ddb96741650f","Type":"ContainerStarted","Data":"08605044c2eb71093900678ac7ed97a8f14073dbc8aa940340b2f6dc75cd7298"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.776876 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" event={"ID":"cd94f14d-05e4-4d74-befd-c508535db058","Type":"ContainerStarted","Data":"fcb7c24ce1107e0a286cb16bcb7535050149ebd694e516e290e313a20f758f94"} Dec 03 11:14:02 crc kubenswrapper[4756]: I1203 11:14:02.880918 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7698f66497-h2r7v"] Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.115144 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7698f66497-h2r7v"] Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.182971 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7fc99cbb77-f5wd5"] Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.214812 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fc99cbb77-f5wd5"] Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.214943 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.267323 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.277509 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.373819 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-swift-storage-0\") pod \"22630198-decd-40e7-968f-3073bc31260b\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.373970 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-nb\") pod \"22630198-decd-40e7-968f-3073bc31260b\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374051 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2cvn\" (UniqueName: \"kubernetes.io/projected/22630198-decd-40e7-968f-3073bc31260b-kube-api-access-n2cvn\") pod \"22630198-decd-40e7-968f-3073bc31260b\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374115 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-sb\") pod \"22630198-decd-40e7-968f-3073bc31260b\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374139 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-svc\") pod \"22630198-decd-40e7-968f-3073bc31260b\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " 
Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374220 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-config\") pod \"22630198-decd-40e7-968f-3073bc31260b\" (UID: \"22630198-decd-40e7-968f-3073bc31260b\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374676 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnht\" (UniqueName: \"kubernetes.io/projected/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-kube-api-access-2qnht\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374741 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-config-data\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374789 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-horizon-secret-key\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374903 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-scripts\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.374919 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-logs\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.419176 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.432881 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22630198-decd-40e7-968f-3073bc31260b-kube-api-access-n2cvn" (OuterVolumeSpecName: "kube-api-access-n2cvn") pod "22630198-decd-40e7-968f-3073bc31260b" (UID: "22630198-decd-40e7-968f-3073bc31260b"). InnerVolumeSpecName "kube-api-access-n2cvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.492421 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-config-data\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.492510 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-horizon-secret-key\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.492584 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-scripts\") pod 
\"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.492604 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-logs\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.492656 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qnht\" (UniqueName: \"kubernetes.io/projected/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-kube-api-access-2qnht\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.492750 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2cvn\" (UniqueName: \"kubernetes.io/projected/22630198-decd-40e7-968f-3073bc31260b-kube-api-access-n2cvn\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.492850 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22630198-decd-40e7-968f-3073bc31260b" (UID: "22630198-decd-40e7-968f-3073bc31260b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.493842 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-scripts\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.499450 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-config-data\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.500842 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-horizon-secret-key\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.530917 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qnht\" (UniqueName: \"kubernetes.io/projected/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-kube-api-access-2qnht\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.531052 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-logs\") pod \"horizon-7fc99cbb77-f5wd5\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.564134 4756 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.565750 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-config" (OuterVolumeSpecName: "config") pod "22630198-decd-40e7-968f-3073bc31260b" (UID: "22630198-decd-40e7-968f-3073bc31260b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.586763 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22630198-decd-40e7-968f-3073bc31260b" (UID: "22630198-decd-40e7-968f-3073bc31260b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.594098 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-svc\") pod \"3473515e-02bd-4c83-abe8-f7e2870f9b47\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.594174 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-config\") pod \"3473515e-02bd-4c83-abe8-f7e2870f9b47\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.594231 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-swift-storage-0\") pod \"3473515e-02bd-4c83-abe8-f7e2870f9b47\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " Dec 03 11:14:03 
crc kubenswrapper[4756]: I1203 11:14:03.594334 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-sb\") pod \"3473515e-02bd-4c83-abe8-f7e2870f9b47\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.594461 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmzlq\" (UniqueName: \"kubernetes.io/projected/3473515e-02bd-4c83-abe8-f7e2870f9b47-kube-api-access-gmzlq\") pod \"3473515e-02bd-4c83-abe8-f7e2870f9b47\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.594517 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-nb\") pod \"3473515e-02bd-4c83-abe8-f7e2870f9b47\" (UID: \"3473515e-02bd-4c83-abe8-f7e2870f9b47\") " Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.595053 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.595075 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.595089 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.600975 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22630198-decd-40e7-968f-3073bc31260b" (UID: "22630198-decd-40e7-968f-3073bc31260b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.630512 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3473515e-02bd-4c83-abe8-f7e2870f9b47-kube-api-access-gmzlq" (OuterVolumeSpecName: "kube-api-access-gmzlq") pod "3473515e-02bd-4c83-abe8-f7e2870f9b47" (UID: "3473515e-02bd-4c83-abe8-f7e2870f9b47"). InnerVolumeSpecName "kube-api-access-gmzlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.651828 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22630198-decd-40e7-968f-3073bc31260b" (UID: "22630198-decd-40e7-968f-3073bc31260b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.666986 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3473515e-02bd-4c83-abe8-f7e2870f9b47" (UID: "3473515e-02bd-4c83-abe8-f7e2870f9b47"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.667986 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3473515e-02bd-4c83-abe8-f7e2870f9b47" (UID: "3473515e-02bd-4c83-abe8-f7e2870f9b47"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.696615 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmzlq\" (UniqueName: \"kubernetes.io/projected/3473515e-02bd-4c83-abe8-f7e2870f9b47-kube-api-access-gmzlq\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.696653 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.696662 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.696671 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22630198-decd-40e7-968f-3073bc31260b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.696682 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.705681 4756 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-config" (OuterVolumeSpecName: "config") pod "3473515e-02bd-4c83-abe8-f7e2870f9b47" (UID: "3473515e-02bd-4c83-abe8-f7e2870f9b47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.713172 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3473515e-02bd-4c83-abe8-f7e2870f9b47" (UID: "3473515e-02bd-4c83-abe8-f7e2870f9b47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.717917 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3473515e-02bd-4c83-abe8-f7e2870f9b47" (UID: "3473515e-02bd-4c83-abe8-f7e2870f9b47"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.802061 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.802099 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.802113 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3473515e-02bd-4c83-abe8-f7e2870f9b47-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.803872 4756 generic.go:334] "Generic (PLEG): container finished" podID="cd94f14d-05e4-4d74-befd-c508535db058" containerID="e59088fa63f99d325d2e3a0574be4d54f6a507149e33211d2c773c0cef4561c5" exitCode=0 Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.804023 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" event={"ID":"cd94f14d-05e4-4d74-befd-c508535db058","Type":"ContainerDied","Data":"e59088fa63f99d325d2e3a0574be4d54f6a507149e33211d2c773c0cef4561c5"} Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.872830 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sxfpk" event={"ID":"aee66879-3546-4e2b-9737-ddb96741650f","Type":"ContainerStarted","Data":"c3ba64230151da45c102b293c7a8c1af0adf91a4a5ec8fd83d6fbdff8791ff39"} Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.897329 4756 generic.go:334] "Generic (PLEG): container finished" podID="3473515e-02bd-4c83-abe8-f7e2870f9b47" containerID="3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4" exitCode=0 Dec 03 11:14:03 crc 
kubenswrapper[4756]: I1203 11:14:03.897548 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.901065 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" event={"ID":"3473515e-02bd-4c83-abe8-f7e2870f9b47","Type":"ContainerDied","Data":"3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4"} Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.901127 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-vddd8" event={"ID":"3473515e-02bd-4c83-abe8-f7e2870f9b47","Type":"ContainerDied","Data":"c12b275a8db19eca44c01b2b3827a3e61c1a158457b504f075f9e27e8788e334"} Dec 03 11:14:03 crc kubenswrapper[4756]: I1203 11:14:03.901147 4756 scope.go:117] "RemoveContainer" containerID="3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.038326 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" event={"ID":"22630198-decd-40e7-968f-3073bc31260b","Type":"ContainerDied","Data":"48504f34c874074c6e91daf753de57795b19e86546f6f4ded36952c0cdb9046a"} Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.038483 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-bxgld" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.053285 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b64b795f7-zbjz6" event={"ID":"40a7a9df-3b33-4670-a7e6-d6594e21a1f6","Type":"ContainerStarted","Data":"26d89b80d9e1f388be4395e8f97aa7d90d72cea7036554f68f808c50904b232b"} Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.061796 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lr48q" event={"ID":"9ceb74cf-6023-4536-bc04-6667b5f48967","Type":"ContainerStarted","Data":"f6cd0655dc51beb3169af403853a47d1b05d0e98ef554ebca7fb811e862646ef"} Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.067945 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sxfpk" podStartSLOduration=4.067921508 podStartE2EDuration="4.067921508s" podCreationTimestamp="2025-12-03 11:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:03.989436326 +0000 UTC m=+1255.019437570" watchObservedRunningTime="2025-12-03 11:14:04.067921508 +0000 UTC m=+1255.097922742" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.086908 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698f66497-h2r7v" event={"ID":"f0d239b0-29b1-442f-894f-e2e64a8f1d08","Type":"ContainerStarted","Data":"f1fe92c259c03731e7a9a09f14d440e9d71e287d982df8905054a9da8bc66e69"} Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.190273 4756 scope.go:117] "RemoveContainer" containerID="3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.190510 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vddd8"] Dec 03 11:14:04 crc kubenswrapper[4756]: E1203 11:14:04.193225 4756 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4\": container with ID starting with 3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4 not found: ID does not exist" containerID="3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.193272 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4"} err="failed to get container status \"3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4\": rpc error: code = NotFound desc = could not find container \"3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4\": container with ID starting with 3ccbfbeaf8d2b2e4fce52ff0a938b0799a1044f9e598c984bfd2fd22a56861a4 not found: ID does not exist" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.193321 4756 scope.go:117] "RemoveContainer" containerID="c6c902de5d1928dc2682561feaf2b6d67d3b19142a1a2ec4f15dede91da70d88" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.201275 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-vddd8"] Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.226740 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bxgld"] Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.232043 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-bxgld"] Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.302300 4756 scope.go:117] "RemoveContainer" containerID="4be4192f9ab934da0b803dfb41b85185b70a13dc6b4f78e094af5bfe0126244c" Dec 03 11:14:04 crc kubenswrapper[4756]: I1203 11:14:04.304738 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fc99cbb77-f5wd5"] Dec 03 
11:14:05 crc kubenswrapper[4756]: I1203 11:14:05.107079 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc99cbb77-f5wd5" event={"ID":"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f","Type":"ContainerStarted","Data":"1490b2ceff5077647aa96001dd5601e6b381ae28d4adf1ed834bd4ca3025ea39"} Dec 03 11:14:05 crc kubenswrapper[4756]: I1203 11:14:05.111943 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" event={"ID":"cd94f14d-05e4-4d74-befd-c508535db058","Type":"ContainerStarted","Data":"5c3bb3f106b58476397ce489e98dd51c8c5a2c085a983f84edfea1a1dcea6ab8"} Dec 03 11:14:05 crc kubenswrapper[4756]: I1203 11:14:05.113622 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:05 crc kubenswrapper[4756]: I1203 11:14:05.149809 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" podStartSLOduration=5.14978362 podStartE2EDuration="5.14978362s" podCreationTimestamp="2025-12-03 11:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:05.146053265 +0000 UTC m=+1256.176054509" watchObservedRunningTime="2025-12-03 11:14:05.14978362 +0000 UTC m=+1256.179784864" Dec 03 11:14:05 crc kubenswrapper[4756]: I1203 11:14:05.255611 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22630198-decd-40e7-968f-3073bc31260b" path="/var/lib/kubelet/pods/22630198-decd-40e7-968f-3073bc31260b/volumes" Dec 03 11:14:05 crc kubenswrapper[4756]: I1203 11:14:05.256872 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3473515e-02bd-4c83-abe8-f7e2870f9b47" path="/var/lib/kubelet/pods/3473515e-02bd-4c83-abe8-f7e2870f9b47/volumes" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.480827 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-b64b795f7-zbjz6"] Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.557720 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f6f9857bb-rncmk"] Dec 03 11:14:09 crc kubenswrapper[4756]: E1203 11:14:09.561636 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22630198-decd-40e7-968f-3073bc31260b" containerName="init" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.561770 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="22630198-decd-40e7-968f-3073bc31260b" containerName="init" Dec 03 11:14:09 crc kubenswrapper[4756]: E1203 11:14:09.561899 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22630198-decd-40e7-968f-3073bc31260b" containerName="dnsmasq-dns" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.562058 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="22630198-decd-40e7-968f-3073bc31260b" containerName="dnsmasq-dns" Dec 03 11:14:09 crc kubenswrapper[4756]: E1203 11:14:09.562280 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3473515e-02bd-4c83-abe8-f7e2870f9b47" containerName="init" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.562444 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3473515e-02bd-4c83-abe8-f7e2870f9b47" containerName="init" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.563375 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3473515e-02bd-4c83-abe8-f7e2870f9b47" containerName="init" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.563543 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="22630198-decd-40e7-968f-3073bc31260b" containerName="dnsmasq-dns" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.570085 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.582833 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.625018 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f6f9857bb-rncmk"] Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.733336 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-secret-key\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.733427 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwwm\" (UniqueName: \"kubernetes.io/projected/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-kube-api-access-5lwwm\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.733567 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-tls-certs\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.733679 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-combined-ca-bundle\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " 
pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.733809 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-scripts\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.733938 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-config-data\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.734061 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-logs\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.765603 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fc99cbb77-f5wd5"] Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.805909 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66bc647888-tcn4m"] Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.809599 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.843620 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-scripts\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.843755 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-config-data\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.843840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-logs\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.844048 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-secret-key\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.844141 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lwwm\" (UniqueName: \"kubernetes.io/projected/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-kube-api-access-5lwwm\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 
11:14:09.844179 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-tls-certs\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.844241 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-combined-ca-bundle\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.847310 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bc647888-tcn4m"] Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.847942 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-logs\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.849297 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-scripts\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.850816 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-config-data\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc 
kubenswrapper[4756]: I1203 11:14:09.857776 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-tls-certs\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.857870 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-secret-key\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.879517 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-combined-ca-bundle\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.883275 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lwwm\" (UniqueName: \"kubernetes.io/projected/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-kube-api-access-5lwwm\") pod \"horizon-5f6f9857bb-rncmk\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.923566 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.952380 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-horizon-tls-certs\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.952520 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-combined-ca-bundle\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.952574 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5xb\" (UniqueName: \"kubernetes.io/projected/00c35a0d-70b4-453d-974a-85b638505280-kube-api-access-sq5xb\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.952593 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c35a0d-70b4-453d-974a-85b638505280-config-data\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.952613 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c35a0d-70b4-453d-974a-85b638505280-scripts\") pod \"horizon-66bc647888-tcn4m\" (UID: 
\"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.952636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c35a0d-70b4-453d-974a-85b638505280-logs\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:09 crc kubenswrapper[4756]: I1203 11:14:09.952658 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-horizon-secret-key\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.062559 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5xb\" (UniqueName: \"kubernetes.io/projected/00c35a0d-70b4-453d-974a-85b638505280-kube-api-access-sq5xb\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.062623 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c35a0d-70b4-453d-974a-85b638505280-config-data\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.062653 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c35a0d-70b4-453d-974a-85b638505280-scripts\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " 
pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.062691 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c35a0d-70b4-453d-974a-85b638505280-logs\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.062732 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-horizon-secret-key\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.062804 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-horizon-tls-certs\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.062978 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-combined-ca-bundle\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.064083 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c35a0d-70b4-453d-974a-85b638505280-logs\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.064287 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00c35a0d-70b4-453d-974a-85b638505280-config-data\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.064544 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00c35a0d-70b4-453d-974a-85b638505280-scripts\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.068816 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-horizon-secret-key\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.070308 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-horizon-tls-certs\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.071650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c35a0d-70b4-453d-974a-85b638505280-combined-ca-bundle\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.091634 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5xb\" (UniqueName: 
\"kubernetes.io/projected/00c35a0d-70b4-453d-974a-85b638505280-kube-api-access-sq5xb\") pod \"horizon-66bc647888-tcn4m\" (UID: \"00c35a0d-70b4-453d-974a-85b638505280\") " pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:10 crc kubenswrapper[4756]: I1203 11:14:10.246286 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:11 crc kubenswrapper[4756]: I1203 11:14:11.217588 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:11 crc kubenswrapper[4756]: I1203 11:14:11.306055 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ggh4n"] Dec 03 11:14:11 crc kubenswrapper[4756]: I1203 11:14:11.306772 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-ggh4n" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="dnsmasq-dns" containerID="cri-o://ccbb4970afbf6abab6fc067eefefaa1e937614acd1e5e3d3f51aecfdefa79187" gracePeriod=10 Dec 03 11:14:12 crc kubenswrapper[4756]: I1203 11:14:12.216530 4756 generic.go:334] "Generic (PLEG): container finished" podID="40e70be6-c17e-4601-a018-1a708bb91d13" containerID="ccbb4970afbf6abab6fc067eefefaa1e937614acd1e5e3d3f51aecfdefa79187" exitCode=0 Dec 03 11:14:12 crc kubenswrapper[4756]: I1203 11:14:12.216584 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ggh4n" event={"ID":"40e70be6-c17e-4601-a018-1a708bb91d13","Type":"ContainerDied","Data":"ccbb4970afbf6abab6fc067eefefaa1e937614acd1e5e3d3f51aecfdefa79187"} Dec 03 11:14:12 crc kubenswrapper[4756]: I1203 11:14:12.220075 4756 generic.go:334] "Generic (PLEG): container finished" podID="c6c322e0-0115-4279-a8bd-2ad910f06ccd" containerID="49d48b50de161a7320a345f0e3295068f7200ceb79182a6f737c41f2039bedad" exitCode=0 Dec 03 11:14:12 crc kubenswrapper[4756]: I1203 11:14:12.220118 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltws7" event={"ID":"c6c322e0-0115-4279-a8bd-2ad910f06ccd","Type":"ContainerDied","Data":"49d48b50de161a7320a345f0e3295068f7200ceb79182a6f737c41f2039bedad"} Dec 03 11:14:18 crc kubenswrapper[4756]: I1203 11:14:18.067976 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-ggh4n" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 03 11:14:19 crc kubenswrapper[4756]: I1203 11:14:19.294798 4756 generic.go:334] "Generic (PLEG): container finished" podID="07a714a5-c627-43b1-8bc1-85e157c25fb0" containerID="00ac1e2785ecdf6cd26b2d3b4c4a9c1203e04b4868e378b2521588c2baa189bb" exitCode=0 Dec 03 11:14:19 crc kubenswrapper[4756]: I1203 11:14:19.294880 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7bvmm" event={"ID":"07a714a5-c627-43b1-8bc1-85e157c25fb0","Type":"ContainerDied","Data":"00ac1e2785ecdf6cd26b2d3b4c4a9c1203e04b4868e378b2521588c2baa189bb"} Dec 03 11:14:21 crc kubenswrapper[4756]: E1203 11:14:21.623509 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 03 11:14:21 crc kubenswrapper[4756]: E1203 11:14:21.625472 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695h558h7bh565h585hc4h5f9h577h7dh56ch555h578h99h87h648h674h545hc6h5bchcch56ch56fh678h64bh96hfbh665h5bbh99h574hb8h5fcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd5c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-b64b795f7-zbjz6_openstack(40a7a9df-3b33-4670-a7e6-d6594e21a1f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:14:21 crc kubenswrapper[4756]: E1203 
11:14:21.628258 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-b64b795f7-zbjz6" podUID="40a7a9df-3b33-4670-a7e6-d6594e21a1f6" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.070102 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-ggh4n" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.632586 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.642548 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:23 crc kubenswrapper[4756]: E1203 11:14:23.716770 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 03 11:14:23 crc kubenswrapper[4756]: E1203 11:14:23.717194 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jlh4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMes
sagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-lr48q_openstack(9ceb74cf-6023-4536-bc04-6667b5f48967): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:14:23 crc kubenswrapper[4756]: E1203 11:14:23.718482 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-lr48q" podUID="9ceb74cf-6023-4536-bc04-6667b5f48967" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.807073 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-config\") pod \"40e70be6-c17e-4601-a018-1a708bb91d13\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.807155 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-dns-svc\") pod \"40e70be6-c17e-4601-a018-1a708bb91d13\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.807254 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-nb\") pod \"40e70be6-c17e-4601-a018-1a708bb91d13\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.807281 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-config-data\") pod \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.807306 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75sj\" (UniqueName: \"kubernetes.io/projected/c6c322e0-0115-4279-a8bd-2ad910f06ccd-kube-api-access-k75sj\") pod \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.807366 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-sb\") pod \"40e70be6-c17e-4601-a018-1a708bb91d13\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.807453 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-credential-keys\") pod \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.808289 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-fernet-keys\") pod \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 
11:14:23.808348 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-combined-ca-bundle\") pod \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.808390 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-scripts\") pod \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\" (UID: \"c6c322e0-0115-4279-a8bd-2ad910f06ccd\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.808468 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nplqr\" (UniqueName: \"kubernetes.io/projected/40e70be6-c17e-4601-a018-1a708bb91d13-kube-api-access-nplqr\") pod \"40e70be6-c17e-4601-a018-1a708bb91d13\" (UID: \"40e70be6-c17e-4601-a018-1a708bb91d13\") " Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.814912 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c6c322e0-0115-4279-a8bd-2ad910f06ccd" (UID: "c6c322e0-0115-4279-a8bd-2ad910f06ccd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.815651 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e70be6-c17e-4601-a018-1a708bb91d13-kube-api-access-nplqr" (OuterVolumeSpecName: "kube-api-access-nplqr") pod "40e70be6-c17e-4601-a018-1a708bb91d13" (UID: "40e70be6-c17e-4601-a018-1a708bb91d13"). InnerVolumeSpecName "kube-api-access-nplqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.817061 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c322e0-0115-4279-a8bd-2ad910f06ccd-kube-api-access-k75sj" (OuterVolumeSpecName: "kube-api-access-k75sj") pod "c6c322e0-0115-4279-a8bd-2ad910f06ccd" (UID: "c6c322e0-0115-4279-a8bd-2ad910f06ccd"). InnerVolumeSpecName "kube-api-access-k75sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.817661 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-scripts" (OuterVolumeSpecName: "scripts") pod "c6c322e0-0115-4279-a8bd-2ad910f06ccd" (UID: "c6c322e0-0115-4279-a8bd-2ad910f06ccd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.825182 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c6c322e0-0115-4279-a8bd-2ad910f06ccd" (UID: "c6c322e0-0115-4279-a8bd-2ad910f06ccd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.841522 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c322e0-0115-4279-a8bd-2ad910f06ccd" (UID: "c6c322e0-0115-4279-a8bd-2ad910f06ccd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.850445 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-config-data" (OuterVolumeSpecName: "config-data") pod "c6c322e0-0115-4279-a8bd-2ad910f06ccd" (UID: "c6c322e0-0115-4279-a8bd-2ad910f06ccd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.864250 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40e70be6-c17e-4601-a018-1a708bb91d13" (UID: "40e70be6-c17e-4601-a018-1a708bb91d13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.869933 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-config" (OuterVolumeSpecName: "config") pod "40e70be6-c17e-4601-a018-1a708bb91d13" (UID: "40e70be6-c17e-4601-a018-1a708bb91d13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.872730 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40e70be6-c17e-4601-a018-1a708bb91d13" (UID: "40e70be6-c17e-4601-a018-1a708bb91d13"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.885659 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40e70be6-c17e-4601-a018-1a708bb91d13" (UID: "40e70be6-c17e-4601-a018-1a708bb91d13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911252 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911298 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911311 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k75sj\" (UniqueName: \"kubernetes.io/projected/c6c322e0-0115-4279-a8bd-2ad910f06ccd-kube-api-access-k75sj\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911324 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911333 4756 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911341 4756 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911350 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911358 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c322e0-0115-4279-a8bd-2ad910f06ccd-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911367 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nplqr\" (UniqueName: \"kubernetes.io/projected/40e70be6-c17e-4601-a018-1a708bb91d13-kube-api-access-nplqr\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911378 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:23 crc kubenswrapper[4756]: I1203 11:14:23.911388 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40e70be6-c17e-4601-a018-1a708bb91d13-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 11:14:24.059468 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 11:14:24.059712 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncch695h689hd7h7fh55fh5c8h675hc9h5b7h5dbh5c8h66dh65h699h9ch547h4hf8hc4h6bh55ch687h586h58ch565h5c7h67dh55bh645h599hfcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqnm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7698f66497-h2r7v_openstack(f0d239b0-29b1-442f-894f-e2e64a8f1d08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 
11:14:24.063806 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7698f66497-h2r7v" podUID="f0d239b0-29b1-442f-894f-e2e64a8f1d08" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 11:14:24.317839 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 11:14:24.318381 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68ch55bh544hc5h9bh5d6h5dhffh98h96hd4h677h554h589h5b7h5f7h65dh575h4h9ch7bh59ch66ch565h75h585h645h59h589h9hfbh5b9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qnht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7fc99cbb77-f5wd5_openstack(f5ea5d85-45c9-42a2-ac7a-4ad4407c665f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 
11:14:24.320938 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7fc99cbb77-f5wd5" podUID="f5ea5d85-45c9-42a2-ac7a-4ad4407c665f" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 11:14:24.330615 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 11:14:24.330812 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5ddh66h59h679h77h56dhcfh65bh5c5h87h58dh57fh5dch86h5ddhb6h6dh654h55h5fdh5fch5fdh594h5cfh64fhffh67ch575h575h9fh56dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crb5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(450e58a8-d0fc-4a72-9ad1-e7a7b7394d04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.354844 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-ggh4n" event={"ID":"40e70be6-c17e-4601-a018-1a708bb91d13","Type":"ContainerDied","Data":"77fde13494b5fe89ec55061635bdb34f2aafa2d13ff3cd3b60dfa557ba44e058"} Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.354915 4756 scope.go:117] "RemoveContainer" containerID="ccbb4970afbf6abab6fc067eefefaa1e937614acd1e5e3d3f51aecfdefa79187" Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.355368 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-ggh4n" Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.361531 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ltws7" Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.362336 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltws7" event={"ID":"c6c322e0-0115-4279-a8bd-2ad910f06ccd","Type":"ContainerDied","Data":"16817f639927f553e32414a330cc41721e60c98103ee9e810fab268ba35f4e22"} Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.362406 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16817f639927f553e32414a330cc41721e60c98103ee9e810fab268ba35f4e22" Dec 03 11:14:24 crc kubenswrapper[4756]: E1203 11:14:24.367060 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-lr48q" podUID="9ceb74cf-6023-4536-bc04-6667b5f48967" Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.453456 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ggh4n"] Dec 03 11:14:24 crc kubenswrapper[4756]: I1203 11:14:24.463882 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-ggh4n"] Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.762717 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ltws7"] Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.775522 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ltws7"] Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.871695 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fxdjr"] Dec 03 11:14:25 crc kubenswrapper[4756]: E1203 11:14:24.872717 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="init" Dec 03 
11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.872733 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="init" Dec 03 11:14:25 crc kubenswrapper[4756]: E1203 11:14:24.872754 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c322e0-0115-4279-a8bd-2ad910f06ccd" containerName="keystone-bootstrap" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.872761 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c322e0-0115-4279-a8bd-2ad910f06ccd" containerName="keystone-bootstrap" Dec 03 11:14:25 crc kubenswrapper[4756]: E1203 11:14:24.872819 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="dnsmasq-dns" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.872829 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="dnsmasq-dns" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.873153 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="dnsmasq-dns" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.873176 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c322e0-0115-4279-a8bd-2ad910f06ccd" containerName="keystone-bootstrap" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.874132 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.876857 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b4cbg" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.877063 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.877244 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.877364 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.878818 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:24.884823 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fxdjr"] Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.042290 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-credential-keys\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.042400 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxnv\" (UniqueName: \"kubernetes.io/projected/7887343b-04ca-42bf-b260-a2d02845676c-kube-api-access-rbxnv\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.042445 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-combined-ca-bundle\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.042471 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-scripts\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.042506 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-config-data\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.042805 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-fernet-keys\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.144897 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-credential-keys\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.145027 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxnv\" (UniqueName: 
\"kubernetes.io/projected/7887343b-04ca-42bf-b260-a2d02845676c-kube-api-access-rbxnv\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.145062 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-combined-ca-bundle\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.145091 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-scripts\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.145138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-config-data\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.145207 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-fernet-keys\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.152238 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-fernet-keys\") pod \"keystone-bootstrap-fxdjr\" (UID: 
\"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.152245 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-config-data\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.152771 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-combined-ca-bundle\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.162182 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-scripts\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.162515 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-credential-keys\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.164431 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxnv\" (UniqueName: \"kubernetes.io/projected/7887343b-04ca-42bf-b260-a2d02845676c-kube-api-access-rbxnv\") pod \"keystone-bootstrap-fxdjr\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 
11:14:25.214852 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.255503 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" path="/var/lib/kubelet/pods/40e70be6-c17e-4601-a018-1a708bb91d13/volumes" Dec 03 11:14:25 crc kubenswrapper[4756]: I1203 11:14:25.256786 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c322e0-0115-4279-a8bd-2ad910f06ccd" path="/var/lib/kubelet/pods/c6c322e0-0115-4279-a8bd-2ad910f06ccd/volumes" Dec 03 11:14:28 crc kubenswrapper[4756]: I1203 11:14:28.070925 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-ggh4n" podUID="40e70be6-c17e-4601-a018-1a708bb91d13" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.300414 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.314689 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7bvmm" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.321362 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.333618 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.350817 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-logs\") pod \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.350906 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwszb\" (UniqueName: \"kubernetes.io/projected/07a714a5-c627-43b1-8bc1-85e157c25fb0-kube-api-access-fwszb\") pod \"07a714a5-c627-43b1-8bc1-85e157c25fb0\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.350998 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-horizon-secret-key\") pod \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351234 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-config-data\") pod \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351260 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-horizon-secret-key\") pod \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351290 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-scripts\") pod \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351325 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-config-data\") pod \"07a714a5-c627-43b1-8bc1-85e157c25fb0\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351362 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0d239b0-29b1-442f-894f-e2e64a8f1d08-horizon-secret-key\") pod \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351384 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqnm7\" (UniqueName: \"kubernetes.io/projected/f0d239b0-29b1-442f-894f-e2e64a8f1d08-kube-api-access-pqnm7\") pod \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351415 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0d239b0-29b1-442f-894f-e2e64a8f1d08-logs\") pod \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351449 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd5c5\" (UniqueName: \"kubernetes.io/projected/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-kube-api-access-fd5c5\") pod \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " Dec 03 11:14:33 crc kubenswrapper[4756]: 
I1203 11:14:33.351484 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qnht\" (UniqueName: \"kubernetes.io/projected/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-kube-api-access-2qnht\") pod \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351539 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-config-data\") pod \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351581 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-config-data\") pod \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\" (UID: \"f0d239b0-29b1-442f-894f-e2e64a8f1d08\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351613 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-scripts\") pod \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351663 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-combined-ca-bundle\") pod \"07a714a5-c627-43b1-8bc1-85e157c25fb0\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351704 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-scripts\") pod 
\"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\" (UID: \"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351737 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-db-sync-config-data\") pod \"07a714a5-c627-43b1-8bc1-85e157c25fb0\" (UID: \"07a714a5-c627-43b1-8bc1-85e157c25fb0\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.351767 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-logs\") pod \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\" (UID: \"40a7a9df-3b33-4670-a7e6-d6594e21a1f6\") " Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.352438 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0d239b0-29b1-442f-894f-e2e64a8f1d08-logs" (OuterVolumeSpecName: "logs") pod "f0d239b0-29b1-442f-894f-e2e64a8f1d08" (UID: "f0d239b0-29b1-442f-894f-e2e64a8f1d08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.352628 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0d239b0-29b1-442f-894f-e2e64a8f1d08-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.352751 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-logs" (OuterVolumeSpecName: "logs") pod "40a7a9df-3b33-4670-a7e6-d6594e21a1f6" (UID: "40a7a9df-3b33-4670-a7e6-d6594e21a1f6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.353362 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-scripts" (OuterVolumeSpecName: "scripts") pod "40a7a9df-3b33-4670-a7e6-d6594e21a1f6" (UID: "40a7a9df-3b33-4670-a7e6-d6594e21a1f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.353585 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-config-data" (OuterVolumeSpecName: "config-data") pod "f0d239b0-29b1-442f-894f-e2e64a8f1d08" (UID: "f0d239b0-29b1-442f-894f-e2e64a8f1d08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.353653 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-config-data" (OuterVolumeSpecName: "config-data") pod "40a7a9df-3b33-4670-a7e6-d6594e21a1f6" (UID: "40a7a9df-3b33-4670-a7e6-d6594e21a1f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.354178 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-scripts" (OuterVolumeSpecName: "scripts") pod "f0d239b0-29b1-442f-894f-e2e64a8f1d08" (UID: "f0d239b0-29b1-442f-894f-e2e64a8f1d08"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.355616 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-logs" (OuterVolumeSpecName: "logs") pod "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f" (UID: "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.356050 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-config-data" (OuterVolumeSpecName: "config-data") pod "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f" (UID: "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.356753 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-scripts" (OuterVolumeSpecName: "scripts") pod "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f" (UID: "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.360755 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a714a5-c627-43b1-8bc1-85e157c25fb0-kube-api-access-fwszb" (OuterVolumeSpecName: "kube-api-access-fwszb") pod "07a714a5-c627-43b1-8bc1-85e157c25fb0" (UID: "07a714a5-c627-43b1-8bc1-85e157c25fb0"). InnerVolumeSpecName "kube-api-access-fwszb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.361643 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d239b0-29b1-442f-894f-e2e64a8f1d08-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f0d239b0-29b1-442f-894f-e2e64a8f1d08" (UID: "f0d239b0-29b1-442f-894f-e2e64a8f1d08"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.361899 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-kube-api-access-fd5c5" (OuterVolumeSpecName: "kube-api-access-fd5c5") pod "40a7a9df-3b33-4670-a7e6-d6594e21a1f6" (UID: "40a7a9df-3b33-4670-a7e6-d6594e21a1f6"). InnerVolumeSpecName "kube-api-access-fd5c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.365031 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d239b0-29b1-442f-894f-e2e64a8f1d08-kube-api-access-pqnm7" (OuterVolumeSpecName: "kube-api-access-pqnm7") pod "f0d239b0-29b1-442f-894f-e2e64a8f1d08" (UID: "f0d239b0-29b1-442f-894f-e2e64a8f1d08"). InnerVolumeSpecName "kube-api-access-pqnm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.384229 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f" (UID: "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.385391 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "40a7a9df-3b33-4670-a7e6-d6594e21a1f6" (UID: "40a7a9df-3b33-4670-a7e6-d6594e21a1f6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.394220 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-kube-api-access-2qnht" (OuterVolumeSpecName: "kube-api-access-2qnht") pod "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f" (UID: "f5ea5d85-45c9-42a2-ac7a-4ad4407c665f"). InnerVolumeSpecName "kube-api-access-2qnht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.396822 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "07a714a5-c627-43b1-8bc1-85e157c25fb0" (UID: "07a714a5-c627-43b1-8bc1-85e157c25fb0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.444134 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07a714a5-c627-43b1-8bc1-85e157c25fb0" (UID: "07a714a5-c627-43b1-8bc1-85e157c25fb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.453990 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454031 4756 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454045 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454056 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454068 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwszb\" (UniqueName: \"kubernetes.io/projected/07a714a5-c627-43b1-8bc1-85e157c25fb0-kube-api-access-fwszb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454081 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454091 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454104 4756 reconciler_common.go:293] "Volume 
detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454114 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454123 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0d239b0-29b1-442f-894f-e2e64a8f1d08-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454132 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqnm7\" (UniqueName: \"kubernetes.io/projected/f0d239b0-29b1-442f-894f-e2e64a8f1d08-kube-api-access-pqnm7\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454160 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd5c5\" (UniqueName: \"kubernetes.io/projected/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-kube-api-access-fd5c5\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454180 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qnht\" (UniqueName: \"kubernetes.io/projected/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-kube-api-access-2qnht\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454192 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454205 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f0d239b0-29b1-442f-894f-e2e64a8f1d08-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454215 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40a7a9df-3b33-4670-a7e6-d6594e21a1f6-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.454223 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.459321 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7698f66497-h2r7v" event={"ID":"f0d239b0-29b1-442f-894f-e2e64a8f1d08","Type":"ContainerDied","Data":"f1fe92c259c03731e7a9a09f14d440e9d71e287d982df8905054a9da8bc66e69"} Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.459474 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7698f66497-h2r7v" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.465164 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fc99cbb77-f5wd5" event={"ID":"f5ea5d85-45c9-42a2-ac7a-4ad4407c665f","Type":"ContainerDied","Data":"1490b2ceff5077647aa96001dd5601e6b381ae28d4adf1ed834bd4ca3025ea39"} Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.465276 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fc99cbb77-f5wd5" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.475650 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b64b795f7-zbjz6" event={"ID":"40a7a9df-3b33-4670-a7e6-d6594e21a1f6","Type":"ContainerDied","Data":"26d89b80d9e1f388be4395e8f97aa7d90d72cea7036554f68f808c50904b232b"} Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.475784 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b64b795f7-zbjz6" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.478434 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7bvmm" event={"ID":"07a714a5-c627-43b1-8bc1-85e157c25fb0","Type":"ContainerDied","Data":"1c940324d5ba65523d9949f99a9553833966bb61da6fe3e541bd9061ab2e8af0"} Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.478479 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c940324d5ba65523d9949f99a9553833966bb61da6fe3e541bd9061ab2e8af0" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.478564 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7bvmm" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.478728 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-config-data" (OuterVolumeSpecName: "config-data") pod "07a714a5-c627-43b1-8bc1-85e157c25fb0" (UID: "07a714a5-c627-43b1-8bc1-85e157c25fb0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.561405 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a714a5-c627-43b1-8bc1-85e157c25fb0-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.586712 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b64b795f7-zbjz6"] Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.597946 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b64b795f7-zbjz6"] Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.617206 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fc99cbb77-f5wd5"] Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.625363 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7fc99cbb77-f5wd5"] Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.656649 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7698f66497-h2r7v"] Dec 03 11:14:33 crc kubenswrapper[4756]: I1203 11:14:33.665163 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7698f66497-h2r7v"] Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 11:14:34.495850 4756 generic.go:334] "Generic (PLEG): container finished" podID="aee66879-3546-4e2b-9737-ddb96741650f" containerID="c3ba64230151da45c102b293c7a8c1af0adf91a4a5ec8fd83d6fbdff8791ff39" exitCode=0 Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 11:14:34.495938 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sxfpk" event={"ID":"aee66879-3546-4e2b-9737-ddb96741650f","Type":"ContainerDied","Data":"c3ba64230151da45c102b293c7a8c1af0adf91a4a5ec8fd83d6fbdff8791ff39"} Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 11:14:34.549055 4756 scope.go:117] "RemoveContainer" 
containerID="4cd748b85a8c1d94491c3ac72c8133be1479faee175ecb994eb2131ba982ba37" Dec 03 11:14:34 crc kubenswrapper[4756]: E1203 11:14:34.592353 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 03 11:14:34 crc kubenswrapper[4756]: E1203 11:14:34.593422 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.p
em,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps5lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7fh72_openstack(aa7e078c-dfed-40c1-ac1c-d9db28aa9d96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:14:34 crc kubenswrapper[4756]: E1203 11:14:34.594873 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7fh72" podUID="aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 11:14:34.902016 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gxs8p"] Dec 03 11:14:34 crc kubenswrapper[4756]: E1203 11:14:34.903082 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a714a5-c627-43b1-8bc1-85e157c25fb0" containerName="glance-db-sync" Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 11:14:34.903098 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a714a5-c627-43b1-8bc1-85e157c25fb0" containerName="glance-db-sync" Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 
11:14:34.903311 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a714a5-c627-43b1-8bc1-85e157c25fb0" containerName="glance-db-sync" Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 11:14:34.904765 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:34 crc kubenswrapper[4756]: I1203 11:14:34.934823 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gxs8p"] Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.007503 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.007592 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwhc\" (UniqueName: \"kubernetes.io/projected/9d10622c-2e57-4502-9640-fd83a29a6d4f-kube-api-access-swwhc\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.007656 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.007712 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.007735 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.007764 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-config\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.110039 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.110149 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swwhc\" (UniqueName: \"kubernetes.io/projected/9d10622c-2e57-4502-9640-fd83a29a6d4f-kube-api-access-swwhc\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.110177 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.110233 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.110254 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.110285 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-config\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.112771 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-config\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.112992 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.114138 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.114217 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.115137 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.169030 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swwhc\" (UniqueName: \"kubernetes.io/projected/9d10622c-2e57-4502-9640-fd83a29a6d4f-kube-api-access-swwhc\") pod \"dnsmasq-dns-785d8bcb8c-gxs8p\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.210741 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f6f9857bb-rncmk"] Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.239435 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.250532 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a7a9df-3b33-4670-a7e6-d6594e21a1f6" path="/var/lib/kubelet/pods/40a7a9df-3b33-4670-a7e6-d6594e21a1f6/volumes" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.251186 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d239b0-29b1-442f-894f-e2e64a8f1d08" path="/var/lib/kubelet/pods/f0d239b0-29b1-442f-894f-e2e64a8f1d08/volumes" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.251721 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ea5d85-45c9-42a2-ac7a-4ad4407c665f" path="/var/lib/kubelet/pods/f5ea5d85-45c9-42a2-ac7a-4ad4407c665f/volumes" Dec 03 11:14:35 crc kubenswrapper[4756]: E1203 11:14:35.624358 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7fh72" podUID="aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.834104 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.842063 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.845671 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xp874" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.847443 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.846501 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 03 11:14:35 crc kubenswrapper[4756]: I1203 11:14:35.863293 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.034936 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.035600 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg4kl\" (UniqueName: \"kubernetes.io/projected/3a5f0072-febc-4261-b06b-1a10ba6b3391-kube-api-access-mg4kl\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.036944 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 
11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.037040 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-scripts\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.037187 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-config-data\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.037211 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.037254 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-logs\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.065371 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.072477 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66bc647888-tcn4m"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.092319 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:14:36 crc kubenswrapper[4756]: E1203 11:14:36.093005 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee66879-3546-4e2b-9737-ddb96741650f" containerName="neutron-db-sync" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.093198 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee66879-3546-4e2b-9737-ddb96741650f" containerName="neutron-db-sync" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.094631 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee66879-3546-4e2b-9737-ddb96741650f" containerName="neutron-db-sync" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.095800 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.099059 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.105076 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.141504 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.141560 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-scripts\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.141608 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-config-data\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.141628 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.141650 
4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-logs\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.141699 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.141747 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg4kl\" (UniqueName: \"kubernetes.io/projected/3a5f0072-febc-4261-b06b-1a10ba6b3391-kube-api-access-mg4kl\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.151342 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-logs\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.151775 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.151983 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.170256 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.171189 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg4kl\" (UniqueName: \"kubernetes.io/projected/3a5f0072-febc-4261-b06b-1a10ba6b3391-kube-api-access-mg4kl\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.172322 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-config-data\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.189145 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-scripts\") pod \"glance-default-external-api-0\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.242853 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-config\") pod \"aee66879-3546-4e2b-9737-ddb96741650f\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.242947 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx8fm\" (UniqueName: \"kubernetes.io/projected/aee66879-3546-4e2b-9737-ddb96741650f-kube-api-access-xx8fm\") pod \"aee66879-3546-4e2b-9737-ddb96741650f\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243010 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-combined-ca-bundle\") pod \"aee66879-3546-4e2b-9737-ddb96741650f\" (UID: \"aee66879-3546-4e2b-9737-ddb96741650f\") " Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243370 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243408 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243447 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243552 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243599 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243620 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-logs\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.243642 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9pcd\" (UniqueName: \"kubernetes.io/projected/f23c1d8d-4584-4a9e-b51f-fe45687ded59-kube-api-access-c9pcd\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.247098 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.252996 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee66879-3546-4e2b-9737-ddb96741650f-kube-api-access-xx8fm" (OuterVolumeSpecName: "kube-api-access-xx8fm") pod "aee66879-3546-4e2b-9737-ddb96741650f" (UID: "aee66879-3546-4e2b-9737-ddb96741650f"). InnerVolumeSpecName "kube-api-access-xx8fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.256994 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.285888 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gxs8p"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.292682 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aee66879-3546-4e2b-9737-ddb96741650f" (UID: "aee66879-3546-4e2b-9737-ddb96741650f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.295578 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fxdjr"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.318173 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-config" (OuterVolumeSpecName: "config") pod "aee66879-3546-4e2b-9737-ddb96741650f" (UID: "aee66879-3546-4e2b-9737-ddb96741650f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.346636 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.347003 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.347206 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.348672 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.353502 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc 
kubenswrapper[4756]: I1203 11:14:36.353981 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.354228 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.355447 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-logs\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.355559 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9pcd\" (UniqueName: \"kubernetes.io/projected/f23c1d8d-4584-4a9e-b51f-fe45687ded59-kube-api-access-c9pcd\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.356974 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.357011 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx8fm\" (UniqueName: 
\"kubernetes.io/projected/aee66879-3546-4e2b-9737-ddb96741650f-kube-api-access-xx8fm\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.357026 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee66879-3546-4e2b-9737-ddb96741650f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.360578 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-logs\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.370444 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.370511 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.377464 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.390644 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c9pcd\" (UniqueName: \"kubernetes.io/projected/f23c1d8d-4584-4a9e-b51f-fe45687ded59-kube-api-access-c9pcd\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.421414 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.563892 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04","Type":"ContainerStarted","Data":"c214dc8518b11d04534ee969b0aba8a1acc2eda4665e7c78b51cb50102b8e306"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.566155 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qvbgz" event={"ID":"818fc868-fe09-4a91-aab2-91f11bac7386","Type":"ContainerStarted","Data":"340472efc057804b16a85d014e55b8a5962a0b566d9e5ea60c23b8d325b23dea"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.587346 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" event={"ID":"9d10622c-2e57-4502-9640-fd83a29a6d4f","Type":"ContainerStarted","Data":"5f98c440353e23dc7612ec1173bb2fde3095e7316411a656e12603cb0d76bdc3"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.591714 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sxfpk" event={"ID":"aee66879-3546-4e2b-9737-ddb96741650f","Type":"ContainerDied","Data":"08605044c2eb71093900678ac7ed97a8f14073dbc8aa940340b2f6dc75cd7298"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.591823 4756 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="08605044c2eb71093900678ac7ed97a8f14073dbc8aa940340b2f6dc75cd7298" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.591884 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sxfpk" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.593149 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qvbgz" podStartSLOduration=4.401507237 podStartE2EDuration="36.593126786s" podCreationTimestamp="2025-12-03 11:14:00 +0000 UTC" firstStartedPulling="2025-12-03 11:14:02.346178004 +0000 UTC m=+1253.376179248" lastFinishedPulling="2025-12-03 11:14:34.537797553 +0000 UTC m=+1285.567798797" observedRunningTime="2025-12-03 11:14:36.587399708 +0000 UTC m=+1287.617400952" watchObservedRunningTime="2025-12-03 11:14:36.593126786 +0000 UTC m=+1287.623128030" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.614977 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fxdjr" event={"ID":"7887343b-04ca-42bf-b260-a2d02845676c","Type":"ContainerStarted","Data":"51113f8eb6f3e5c77682efe2d44dfe6c135212bbf5ff9bde9898933a987a69f9"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.617886 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bc647888-tcn4m" event={"ID":"00c35a0d-70b4-453d-974a-85b638505280","Type":"ContainerStarted","Data":"0366bdffee399937fa9964184b715a823c713d25e0e950a8e1bf27e2fd2b2d91"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.623201 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6f9857bb-rncmk" event={"ID":"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3","Type":"ContainerStarted","Data":"c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.630839 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6f9857bb-rncmk" 
event={"ID":"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3","Type":"ContainerStarted","Data":"29b135dc5193c36bf4d0d3801e8728a8bb216c6c9f4bf31bda97e2079a723df2"} Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.726742 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.836739 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gxs8p"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.893016 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f7c586c7d-n86cw"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.896626 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.900732 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.901014 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nmqwq" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.901165 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.901184 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.921612 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6kdvr"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.923472 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.951031 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f7c586c7d-n86cw"] Dec 03 11:14:36 crc kubenswrapper[4756]: I1203 11:14:36.990250 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6kdvr"] Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.078914 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqz9l\" (UniqueName: \"kubernetes.io/projected/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-kube-api-access-wqz9l\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.079634 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-ovndb-tls-certs\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.079774 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.079895 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-config\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " 
pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.080255 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.080499 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-httpd-config\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.080914 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-combined-ca-bundle\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.081102 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-config\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.081215 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-svc\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " 
pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.081304 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.081393 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6nc\" (UniqueName: \"kubernetes.io/projected/2990fd3a-f317-4628-a783-6238277ff18f-kube-api-access-lg6nc\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.183341 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-config\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.183822 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-svc\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.183880 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 
11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.183899 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg6nc\" (UniqueName: \"kubernetes.io/projected/2990fd3a-f317-4628-a783-6238277ff18f-kube-api-access-lg6nc\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.183976 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqz9l\" (UniqueName: \"kubernetes.io/projected/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-kube-api-access-wqz9l\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.184005 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-ovndb-tls-certs\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.184030 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.184057 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-config\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 
11:14:37.184091 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.184109 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-httpd-config\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.184183 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-combined-ca-bundle\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.185123 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.185875 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.186202 4756 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.186208 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-svc\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.186902 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-config\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.192751 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-config\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.196336 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-ovndb-tls-certs\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.197212 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-combined-ca-bundle\") pod 
\"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.214131 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-httpd-config\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.225408 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6nc\" (UniqueName: \"kubernetes.io/projected/2990fd3a-f317-4628-a783-6238277ff18f-kube-api-access-lg6nc\") pod \"neutron-7f7c586c7d-n86cw\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.236410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqz9l\" (UniqueName: \"kubernetes.io/projected/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-kube-api-access-wqz9l\") pod \"dnsmasq-dns-55f844cf75-6kdvr\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.300766 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.301760 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.356231 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.722725 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.738907 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3a5f0072-febc-4261-b06b-1a10ba6b3391","Type":"ContainerStarted","Data":"b479a24ed5670df93c7f1f1a1a68c879e4a7a6d5ba27c656d36294ddd0481835"} Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.748473 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6f9857bb-rncmk" event={"ID":"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3","Type":"ContainerStarted","Data":"7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808"} Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.766825 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fxdjr" event={"ID":"7887343b-04ca-42bf-b260-a2d02845676c","Type":"ContainerStarted","Data":"42f20f45461cdc0631f64c06de988d4e72a079e9a399f06bd35c3338fbe986af"} Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.811125 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f6f9857bb-rncmk" podStartSLOduration=28.260812852 podStartE2EDuration="28.811091514s" podCreationTimestamp="2025-12-03 11:14:09 +0000 UTC" firstStartedPulling="2025-12-03 11:14:35.612548264 +0000 UTC m=+1286.642549508" lastFinishedPulling="2025-12-03 11:14:36.162826926 +0000 UTC m=+1287.192828170" observedRunningTime="2025-12-03 11:14:37.78813718 +0000 UTC m=+1288.818138424" watchObservedRunningTime="2025-12-03 11:14:37.811091514 +0000 UTC m=+1288.841092758" Dec 03 11:14:37 crc kubenswrapper[4756]: W1203 11:14:37.854216 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf23c1d8d_4584_4a9e_b51f_fe45687ded59.slice/crio-b12af819660296088994282388b31aca03cee4b85a23265cc4bc6e2ba71c68ac WatchSource:0}: Error finding container b12af819660296088994282388b31aca03cee4b85a23265cc4bc6e2ba71c68ac: Status 404 returned error can't find the container with id b12af819660296088994282388b31aca03cee4b85a23265cc4bc6e2ba71c68ac Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.854408 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bc647888-tcn4m" event={"ID":"00c35a0d-70b4-453d-974a-85b638505280","Type":"ContainerStarted","Data":"4cdc1ce8195e0254eaef0184504cdcff99c4171ffecdcefc02fa034c8dc09000"} Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.875027 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fxdjr" podStartSLOduration=13.874996952 podStartE2EDuration="13.874996952s" podCreationTimestamp="2025-12-03 11:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:37.813684565 +0000 UTC m=+1288.843685819" watchObservedRunningTime="2025-12-03 11:14:37.874996952 +0000 UTC m=+1288.904998196" Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.908573 4756 generic.go:334] "Generic (PLEG): container finished" podID="9d10622c-2e57-4502-9640-fd83a29a6d4f" containerID="45c475083b0241d314ae602c664a7e2c1277aba53a8ff61aac7252fa6e019742" exitCode=0 Dec 03 11:14:37 crc kubenswrapper[4756]: I1203 11:14:37.909132 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" event={"ID":"9d10622c-2e57-4502-9640-fd83a29a6d4f","Type":"ContainerDied","Data":"45c475083b0241d314ae602c664a7e2c1277aba53a8ff61aac7252fa6e019742"} Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.191943 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-7f7c586c7d-n86cw"] Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.791867 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.804614 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6kdvr"] Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.847110 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-sb\") pod \"9d10622c-2e57-4502-9640-fd83a29a6d4f\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.847200 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-nb\") pod \"9d10622c-2e57-4502-9640-fd83a29a6d4f\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.847246 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-config\") pod \"9d10622c-2e57-4502-9640-fd83a29a6d4f\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.847525 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swwhc\" (UniqueName: \"kubernetes.io/projected/9d10622c-2e57-4502-9640-fd83a29a6d4f-kube-api-access-swwhc\") pod \"9d10622c-2e57-4502-9640-fd83a29a6d4f\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.847666 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-svc\") pod \"9d10622c-2e57-4502-9640-fd83a29a6d4f\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.847706 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-swift-storage-0\") pod \"9d10622c-2e57-4502-9640-fd83a29a6d4f\" (UID: \"9d10622c-2e57-4502-9640-fd83a29a6d4f\") " Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.908348 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d10622c-2e57-4502-9640-fd83a29a6d4f-kube-api-access-swwhc" (OuterVolumeSpecName: "kube-api-access-swwhc") pod "9d10622c-2e57-4502-9640-fd83a29a6d4f" (UID: "9d10622c-2e57-4502-9640-fd83a29a6d4f"). InnerVolumeSpecName "kube-api-access-swwhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.947241 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bc647888-tcn4m" event={"ID":"00c35a0d-70b4-453d-974a-85b638505280","Type":"ContainerStarted","Data":"7b757dbbd271bf889135a4e68662a58696ae5a3097ea2a472cbfee50d00094ed"} Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.962690 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swwhc\" (UniqueName: \"kubernetes.io/projected/9d10622c-2e57-4502-9640-fd83a29a6d4f-kube-api-access-swwhc\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.965267 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f7c586c7d-n86cw" event={"ID":"2990fd3a-f317-4628-a783-6238277ff18f","Type":"ContainerStarted","Data":"9e0abfc8042f51565ab8ee44882361f5cb3729cdf5bf22ead02df47e49b117bf"} Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.970472 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" event={"ID":"9eadf119-7f74-4248-bd3f-1eabc2cdbf92","Type":"ContainerStarted","Data":"0b1bd800190fbe8a3f2d9eb8a089797afae6f6612dbcd13bf12726aa06ee4b52"} Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.991891 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" event={"ID":"9d10622c-2e57-4502-9640-fd83a29a6d4f","Type":"ContainerDied","Data":"5f98c440353e23dc7612ec1173bb2fde3095e7316411a656e12603cb0d76bdc3"} Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.991980 4756 scope.go:117] "RemoveContainer" containerID="45c475083b0241d314ae602c664a7e2c1277aba53a8ff61aac7252fa6e019742" Dec 03 11:14:38 crc kubenswrapper[4756]: I1203 11:14:38.992173 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gxs8p" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.014106 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f23c1d8d-4584-4a9e-b51f-fe45687ded59","Type":"ContainerStarted","Data":"b12af819660296088994282388b31aca03cee4b85a23265cc4bc6e2ba71c68ac"} Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.016020 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66bc647888-tcn4m" podStartSLOduration=29.430852388 podStartE2EDuration="30.015984845s" podCreationTimestamp="2025-12-03 11:14:09 +0000 UTC" firstStartedPulling="2025-12-03 11:14:36.091806607 +0000 UTC m=+1287.121807851" lastFinishedPulling="2025-12-03 11:14:36.676939064 +0000 UTC m=+1287.706940308" observedRunningTime="2025-12-03 11:14:38.991612327 +0000 UTC m=+1290.021613571" watchObservedRunningTime="2025-12-03 11:14:39.015984845 +0000 UTC m=+1290.045986079" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.109484 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d10622c-2e57-4502-9640-fd83a29a6d4f" (UID: "9d10622c-2e57-4502-9640-fd83a29a6d4f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.122022 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.172530 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.211149 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.230600 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d10622c-2e57-4502-9640-fd83a29a6d4f" (UID: "9d10622c-2e57-4502-9640-fd83a29a6d4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.260310 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d10622c-2e57-4502-9640-fd83a29a6d4f" (UID: "9d10622c-2e57-4502-9640-fd83a29a6d4f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.271760 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d10622c-2e57-4502-9640-fd83a29a6d4f" (UID: "9d10622c-2e57-4502-9640-fd83a29a6d4f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.274717 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.274773 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.274790 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.307708 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-config" (OuterVolumeSpecName: "config") pod "9d10622c-2e57-4502-9640-fd83a29a6d4f" (UID: "9d10622c-2e57-4502-9640-fd83a29a6d4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.377293 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d10622c-2e57-4502-9640-fd83a29a6d4f-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.928575 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.929101 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:14:39 crc kubenswrapper[4756]: E1203 11:14:39.947648 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d10622c_2e57_4502_9640_fd83a29a6d4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d10622c_2e57_4502_9640_fd83a29a6d4f.slice/crio-5f98c440353e23dc7612ec1173bb2fde3095e7316411a656e12603cb0d76bdc3\": RecentStats: unable to find data in memory cache]" Dec 03 11:14:39 crc kubenswrapper[4756]: I1203 11:14:39.993692 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gxs8p"] Dec 03 11:14:40 crc kubenswrapper[4756]: I1203 11:14:40.011148 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gxs8p"] Dec 03 11:14:40 crc kubenswrapper[4756]: I1203 11:14:40.054275 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f7c586c7d-n86cw" event={"ID":"2990fd3a-f317-4628-a783-6238277ff18f","Type":"ContainerStarted","Data":"475abb4eb0d1a6febb4c5ad6c0005153650ab5be29d6fe73473ba22868e9fd4d"} Dec 03 11:14:40 crc kubenswrapper[4756]: I1203 11:14:40.076698 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"3a5f0072-febc-4261-b06b-1a10ba6b3391","Type":"ContainerStarted","Data":"b3a966ac7cbf677e19902d18b0058cab669e59603d9d6ed8d886604b6dc55f40"} Dec 03 11:14:40 crc kubenswrapper[4756]: I1203 11:14:40.248667 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:40 crc kubenswrapper[4756]: I1203 11:14:40.249282 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.161984 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-745d87f76f-mbrkc"] Dec 03 11:14:41 crc kubenswrapper[4756]: E1203 11:14:41.163019 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d10622c-2e57-4502-9640-fd83a29a6d4f" containerName="init" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.163040 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d10622c-2e57-4502-9640-fd83a29a6d4f" containerName="init" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.163280 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d10622c-2e57-4502-9640-fd83a29a6d4f" containerName="init" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.166021 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.168883 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.170981 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.203379 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-745d87f76f-mbrkc"] Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.263280 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d10622c-2e57-4502-9640-fd83a29a6d4f" path="/var/lib/kubelet/pods/9d10622c-2e57-4502-9640-fd83a29a6d4f/volumes" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.344233 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-internal-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.344311 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-httpd-config\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.344383 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-config\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 
11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.344428 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gss59\" (UniqueName: \"kubernetes.io/projected/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-kube-api-access-gss59\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.344923 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-ovndb-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.345160 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-combined-ca-bundle\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.345282 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-public-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.449511 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-ovndb-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 
11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.450453 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-combined-ca-bundle\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.450487 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-public-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.450580 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-internal-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.450615 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-httpd-config\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.450636 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-config\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.450654 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gss59\" (UniqueName: \"kubernetes.io/projected/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-kube-api-access-gss59\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.459502 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-ovndb-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.463789 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-public-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.464459 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-httpd-config\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.464650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-combined-ca-bundle\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.467059 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-internal-tls-certs\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.473711 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gss59\" (UniqueName: \"kubernetes.io/projected/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-kube-api-access-gss59\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.474177 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4234afcf-96f0-4340-b5c0-e1aac6c4dacb-config\") pod \"neutron-745d87f76f-mbrkc\" (UID: \"4234afcf-96f0-4340-b5c0-e1aac6c4dacb\") " pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:41 crc kubenswrapper[4756]: I1203 11:14:41.495712 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.132154 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f23c1d8d-4584-4a9e-b51f-fe45687ded59","Type":"ContainerStarted","Data":"21abbddc77811ab7b29eb39ef2f3405138eb655ee0981a915c15b7d612096f42"} Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.144751 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f7c586c7d-n86cw" event={"ID":"2990fd3a-f317-4628-a783-6238277ff18f","Type":"ContainerStarted","Data":"1175dbb598e852115e35598807fd73e576ffcd00994e82e04614502f48c80feb"} Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.146515 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.151534 4756 generic.go:334] "Generic (PLEG): container finished" podID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerID="2db6822e9c6bca34d936fb43bf28c3c1b2e13468ff8da6ab4a5c8c26e65647d6" exitCode=0 Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.151606 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" event={"ID":"9eadf119-7f74-4248-bd3f-1eabc2cdbf92","Type":"ContainerDied","Data":"2db6822e9c6bca34d936fb43bf28c3c1b2e13468ff8da6ab4a5c8c26e65647d6"} Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.155729 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3a5f0072-febc-4261-b06b-1a10ba6b3391","Type":"ContainerStarted","Data":"b150ef69ebeb0df56329756301c54ab55c8f91cbbd17da2b3dbdd3d52f97008a"} Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.155791 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" 
containerName="glance-log" containerID="cri-o://b3a966ac7cbf677e19902d18b0058cab669e59603d9d6ed8d886604b6dc55f40" gracePeriod=30 Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.155875 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerName="glance-httpd" containerID="cri-o://b150ef69ebeb0df56329756301c54ab55c8f91cbbd17da2b3dbdd3d52f97008a" gracePeriod=30 Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.159718 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lr48q" event={"ID":"9ceb74cf-6023-4536-bc04-6667b5f48967","Type":"ContainerStarted","Data":"abbe059b6ecbdb73f056cec8a058014916bca8486c8009fd1dcedcac12a9fa25"} Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.179258 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f7c586c7d-n86cw" podStartSLOduration=6.179226291 podStartE2EDuration="6.179226291s" podCreationTimestamp="2025-12-03 11:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:42.173655358 +0000 UTC m=+1293.203656612" watchObservedRunningTime="2025-12-03 11:14:42.179226291 +0000 UTC m=+1293.209227535" Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.263331 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.263129422 podStartE2EDuration="8.263129422s" podCreationTimestamp="2025-12-03 11:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:42.209732271 +0000 UTC m=+1293.239733515" watchObservedRunningTime="2025-12-03 11:14:42.263129422 +0000 UTC m=+1293.293130666" Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.308918 
4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lr48q" podStartSLOduration=5.623578503 podStartE2EDuration="42.308886856s" podCreationTimestamp="2025-12-03 11:14:00 +0000 UTC" firstStartedPulling="2025-12-03 11:14:02.648858953 +0000 UTC m=+1253.678860197" lastFinishedPulling="2025-12-03 11:14:39.334167306 +0000 UTC m=+1290.364168550" observedRunningTime="2025-12-03 11:14:42.2270625 +0000 UTC m=+1293.257063744" watchObservedRunningTime="2025-12-03 11:14:42.308886856 +0000 UTC m=+1293.338888100" Dec 03 11:14:42 crc kubenswrapper[4756]: I1203 11:14:42.419321 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-745d87f76f-mbrkc"] Dec 03 11:14:42 crc kubenswrapper[4756]: W1203 11:14:42.441397 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4234afcf_96f0_4340_b5c0_e1aac6c4dacb.slice/crio-b73232977682cb4114e2393a1728c1e4122755b51a999f8ec7e313b930da8743 WatchSource:0}: Error finding container b73232977682cb4114e2393a1728c1e4122755b51a999f8ec7e313b930da8743: Status 404 returned error can't find the container with id b73232977682cb4114e2393a1728c1e4122755b51a999f8ec7e313b930da8743 Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.195394 4756 generic.go:334] "Generic (PLEG): container finished" podID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerID="b150ef69ebeb0df56329756301c54ab55c8f91cbbd17da2b3dbdd3d52f97008a" exitCode=0 Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.196341 4756 generic.go:334] "Generic (PLEG): container finished" podID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerID="b3a966ac7cbf677e19902d18b0058cab669e59603d9d6ed8d886604b6dc55f40" exitCode=143 Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.196019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"3a5f0072-febc-4261-b06b-1a10ba6b3391","Type":"ContainerDied","Data":"b150ef69ebeb0df56329756301c54ab55c8f91cbbd17da2b3dbdd3d52f97008a"} Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.196480 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3a5f0072-febc-4261-b06b-1a10ba6b3391","Type":"ContainerDied","Data":"b3a966ac7cbf677e19902d18b0058cab669e59603d9d6ed8d886604b6dc55f40"} Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.234292 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-log" containerID="cri-o://21abbddc77811ab7b29eb39ef2f3405138eb655ee0981a915c15b7d612096f42" gracePeriod=30 Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.235014 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-httpd" containerID="cri-o://b85dbf6a936ef3511edefafddcb6dfda44753db77af7a4a9b96651f2d423d9eb" gracePeriod=30 Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.261588 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f23c1d8d-4584-4a9e-b51f-fe45687ded59","Type":"ContainerStarted","Data":"b85dbf6a936ef3511edefafddcb6dfda44753db77af7a4a9b96651f2d423d9eb"} Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.261641 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-745d87f76f-mbrkc" event={"ID":"4234afcf-96f0-4340-b5c0-e1aac6c4dacb","Type":"ContainerStarted","Data":"165fbd89883df97adec366faa92cf3d89ec67d2d281be72ea5ad8c88e76b22a6"} Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.261661 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-745d87f76f-mbrkc" 
event={"ID":"4234afcf-96f0-4340-b5c0-e1aac6c4dacb","Type":"ContainerStarted","Data":"b73232977682cb4114e2393a1728c1e4122755b51a999f8ec7e313b930da8743"} Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.275851 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" event={"ID":"9eadf119-7f74-4248-bd3f-1eabc2cdbf92","Type":"ContainerStarted","Data":"a9a5e4493e4bb0868fea47f3040c94de7637bcc857abf28583f64b587b8a495e"} Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.275995 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:43 crc kubenswrapper[4756]: I1203 11:14:43.283169 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.282551152 podStartE2EDuration="8.282551152s" podCreationTimestamp="2025-12-03 11:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:43.265063909 +0000 UTC m=+1294.295065163" watchObservedRunningTime="2025-12-03 11:14:43.282551152 +0000 UTC m=+1294.312552396" Dec 03 11:14:44 crc kubenswrapper[4756]: I1203 11:14:44.287007 4756 generic.go:334] "Generic (PLEG): container finished" podID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerID="b85dbf6a936ef3511edefafddcb6dfda44753db77af7a4a9b96651f2d423d9eb" exitCode=143 Dec 03 11:14:44 crc kubenswrapper[4756]: I1203 11:14:44.287471 4756 generic.go:334] "Generic (PLEG): container finished" podID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerID="21abbddc77811ab7b29eb39ef2f3405138eb655ee0981a915c15b7d612096f42" exitCode=143 Dec 03 11:14:44 crc kubenswrapper[4756]: I1203 11:14:44.287180 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f23c1d8d-4584-4a9e-b51f-fe45687ded59","Type":"ContainerDied","Data":"b85dbf6a936ef3511edefafddcb6dfda44753db77af7a4a9b96651f2d423d9eb"} Dec 03 11:14:44 crc kubenswrapper[4756]: I1203 11:14:44.287841 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f23c1d8d-4584-4a9e-b51f-fe45687ded59","Type":"ContainerDied","Data":"21abbddc77811ab7b29eb39ef2f3405138eb655ee0981a915c15b7d612096f42"} Dec 03 11:14:45 crc kubenswrapper[4756]: I1203 11:14:45.300367 4756 generic.go:334] "Generic (PLEG): container finished" podID="818fc868-fe09-4a91-aab2-91f11bac7386" containerID="340472efc057804b16a85d014e55b8a5962a0b566d9e5ea60c23b8d325b23dea" exitCode=0 Dec 03 11:14:45 crc kubenswrapper[4756]: I1203 11:14:45.300485 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qvbgz" event={"ID":"818fc868-fe09-4a91-aab2-91f11bac7386","Type":"ContainerDied","Data":"340472efc057804b16a85d014e55b8a5962a0b566d9e5ea60c23b8d325b23dea"} Dec 03 11:14:45 crc kubenswrapper[4756]: I1203 11:14:45.304265 4756 generic.go:334] "Generic (PLEG): container finished" podID="7887343b-04ca-42bf-b260-a2d02845676c" containerID="42f20f45461cdc0631f64c06de988d4e72a079e9a399f06bd35c3338fbe986af" exitCode=0 Dec 03 11:14:45 crc kubenswrapper[4756]: I1203 11:14:45.304321 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fxdjr" event={"ID":"7887343b-04ca-42bf-b260-a2d02845676c","Type":"ContainerDied","Data":"42f20f45461cdc0631f64c06de988d4e72a079e9a399f06bd35c3338fbe986af"} Dec 03 11:14:45 crc kubenswrapper[4756]: I1203 11:14:45.307509 4756 generic.go:334] "Generic (PLEG): container finished" podID="9ceb74cf-6023-4536-bc04-6667b5f48967" containerID="abbe059b6ecbdb73f056cec8a058014916bca8486c8009fd1dcedcac12a9fa25" exitCode=0 Dec 03 11:14:45 crc kubenswrapper[4756]: I1203 11:14:45.307545 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-lr48q" event={"ID":"9ceb74cf-6023-4536-bc04-6667b5f48967","Type":"ContainerDied","Data":"abbe059b6ecbdb73f056cec8a058014916bca8486c8009fd1dcedcac12a9fa25"} Dec 03 11:14:45 crc kubenswrapper[4756]: I1203 11:14:45.331277 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" podStartSLOduration=9.33124245 podStartE2EDuration="9.33124245s" podCreationTimestamp="2025-12-03 11:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:43.3104482 +0000 UTC m=+1294.340449444" watchObservedRunningTime="2025-12-03 11:14:45.33124245 +0000 UTC m=+1296.361243694" Dec 03 11:14:47 crc kubenswrapper[4756]: I1203 11:14:47.359185 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:14:47 crc kubenswrapper[4756]: I1203 11:14:47.440768 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-4r422"] Dec 03 11:14:47 crc kubenswrapper[4756]: I1203 11:14:47.441128 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" podUID="cd94f14d-05e4-4d74-befd-c508535db058" containerName="dnsmasq-dns" containerID="cri-o://5c3bb3f106b58476397ce489e98dd51c8c5a2c085a983f84edfea1a1dcea6ab8" gracePeriod=10 Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.289758 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.303300 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.363001 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.363802 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.364141 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3a5f0072-febc-4261-b06b-1a10ba6b3391","Type":"ContainerDied","Data":"b479a24ed5670df93c7f1f1a1a68c879e4a7a6d5ba27c656d36294ddd0481835"} Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.364182 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b479a24ed5670df93c7f1f1a1a68c879e4a7a6d5ba27c656d36294ddd0481835" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.372415 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lr48q" event={"ID":"9ceb74cf-6023-4536-bc04-6667b5f48967","Type":"ContainerDied","Data":"f6cd0655dc51beb3169af403853a47d1b05d0e98ef554ebca7fb811e862646ef"} Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.372474 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6cd0655dc51beb3169af403853a47d1b05d0e98ef554ebca7fb811e862646ef" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.372554 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lr48q" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.380574 4756 generic.go:334] "Generic (PLEG): container finished" podID="cd94f14d-05e4-4d74-befd-c508535db058" containerID="5c3bb3f106b58476397ce489e98dd51c8c5a2c085a983f84edfea1a1dcea6ab8" exitCode=0 Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.380645 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" event={"ID":"cd94f14d-05e4-4d74-befd-c508535db058","Type":"ContainerDied","Data":"5c3bb3f106b58476397ce489e98dd51c8c5a2c085a983f84edfea1a1dcea6ab8"} Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.385117 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qvbgz" event={"ID":"818fc868-fe09-4a91-aab2-91f11bac7386","Type":"ContainerDied","Data":"b0152bf8149cb719e30e54d2913d162b841b91dbf1b51132398b4b6fecfb3142"} Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.385152 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0152bf8149cb719e30e54d2913d162b841b91dbf1b51132398b4b6fecfb3142" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.385212 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qvbgz" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.407661 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fxdjr" event={"ID":"7887343b-04ca-42bf-b260-a2d02845676c","Type":"ContainerDied","Data":"51113f8eb6f3e5c77682efe2d44dfe6c135212bbf5ff9bde9898933a987a69f9"} Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.407737 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51113f8eb6f3e5c77682efe2d44dfe6c135212bbf5ff9bde9898933a987a69f9" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.407797 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fxdjr" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.460700 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg4kl\" (UniqueName: \"kubernetes.io/projected/3a5f0072-febc-4261-b06b-1a10ba6b3391-kube-api-access-mg4kl\") pod \"3a5f0072-febc-4261-b06b-1a10ba6b3391\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.460754 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-credential-keys\") pod \"7887343b-04ca-42bf-b260-a2d02845676c\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.460773 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-scripts\") pod \"3a5f0072-febc-4261-b06b-1a10ba6b3391\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.460800 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-combined-ca-bundle\") pod \"3a5f0072-febc-4261-b06b-1a10ba6b3391\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.460839 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlh4m\" (UniqueName: \"kubernetes.io/projected/9ceb74cf-6023-4536-bc04-6667b5f48967-kube-api-access-jlh4m\") pod \"9ceb74cf-6023-4536-bc04-6667b5f48967\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.460933 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-fernet-keys\") pod \"7887343b-04ca-42bf-b260-a2d02845676c\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463058 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbxnv\" (UniqueName: \"kubernetes.io/projected/7887343b-04ca-42bf-b260-a2d02845676c-kube-api-access-rbxnv\") pod \"7887343b-04ca-42bf-b260-a2d02845676c\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463222 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3a5f0072-febc-4261-b06b-1a10ba6b3391\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463434 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-combined-ca-bundle\") pod \"818fc868-fe09-4a91-aab2-91f11bac7386\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463470 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ceb74cf-6023-4536-bc04-6667b5f48967-logs\") pod \"9ceb74cf-6023-4536-bc04-6667b5f48967\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463493 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-config-data\") pod \"3a5f0072-febc-4261-b06b-1a10ba6b3391\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463672 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-db-sync-config-data\") pod \"818fc868-fe09-4a91-aab2-91f11bac7386\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463702 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-logs\") pod \"3a5f0072-febc-4261-b06b-1a10ba6b3391\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463739 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-combined-ca-bundle\") pod \"7887343b-04ca-42bf-b260-a2d02845676c\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463761 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-config-data\") pod \"7887343b-04ca-42bf-b260-a2d02845676c\" (UID: \"7887343b-04ca-42bf-b260-a2d02845676c\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463789 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-combined-ca-bundle\") pod \"9ceb74cf-6023-4536-bc04-6667b5f48967\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463845 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-scripts\") pod \"7887343b-04ca-42bf-b260-a2d02845676c\" (UID: 
\"7887343b-04ca-42bf-b260-a2d02845676c\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463877 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-config-data\") pod \"9ceb74cf-6023-4536-bc04-6667b5f48967\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463918 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-httpd-run\") pod \"3a5f0072-febc-4261-b06b-1a10ba6b3391\" (UID: \"3a5f0072-febc-4261-b06b-1a10ba6b3391\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.463985 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-scripts\") pod \"9ceb74cf-6023-4536-bc04-6667b5f48967\" (UID: \"9ceb74cf-6023-4536-bc04-6667b5f48967\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.464049 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x59f6\" (UniqueName: \"kubernetes.io/projected/818fc868-fe09-4a91-aab2-91f11bac7386-kube-api-access-x59f6\") pod \"818fc868-fe09-4a91-aab2-91f11bac7386\" (UID: \"818fc868-fe09-4a91-aab2-91f11bac7386\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.485690 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ceb74cf-6023-4536-bc04-6667b5f48967-logs" (OuterVolumeSpecName: "logs") pod "9ceb74cf-6023-4536-bc04-6667b5f48967" (UID: "9ceb74cf-6023-4536-bc04-6667b5f48967"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.486412 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "818fc868-fe09-4a91-aab2-91f11bac7386" (UID: "818fc868-fe09-4a91-aab2-91f11bac7386"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.488453 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-logs" (OuterVolumeSpecName: "logs") pod "3a5f0072-febc-4261-b06b-1a10ba6b3391" (UID: "3a5f0072-febc-4261-b06b-1a10ba6b3391"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.488915 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3a5f0072-febc-4261-b06b-1a10ba6b3391" (UID: "3a5f0072-febc-4261-b06b-1a10ba6b3391"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.487228 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "3a5f0072-febc-4261-b06b-1a10ba6b3391" (UID: "3a5f0072-febc-4261-b06b-1a10ba6b3391"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.489568 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5f0072-febc-4261-b06b-1a10ba6b3391-kube-api-access-mg4kl" (OuterVolumeSpecName: "kube-api-access-mg4kl") pod "3a5f0072-febc-4261-b06b-1a10ba6b3391" (UID: "3a5f0072-febc-4261-b06b-1a10ba6b3391"). InnerVolumeSpecName "kube-api-access-mg4kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.494483 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-scripts" (OuterVolumeSpecName: "scripts") pod "3a5f0072-febc-4261-b06b-1a10ba6b3391" (UID: "3a5f0072-febc-4261-b06b-1a10ba6b3391"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.494667 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7887343b-04ca-42bf-b260-a2d02845676c" (UID: "7887343b-04ca-42bf-b260-a2d02845676c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.501774 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ceb74cf-6023-4536-bc04-6667b5f48967-kube-api-access-jlh4m" (OuterVolumeSpecName: "kube-api-access-jlh4m") pod "9ceb74cf-6023-4536-bc04-6667b5f48967" (UID: "9ceb74cf-6023-4536-bc04-6667b5f48967"). InnerVolumeSpecName "kube-api-access-jlh4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.511371 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7887343b-04ca-42bf-b260-a2d02845676c-kube-api-access-rbxnv" (OuterVolumeSpecName: "kube-api-access-rbxnv") pod "7887343b-04ca-42bf-b260-a2d02845676c" (UID: "7887343b-04ca-42bf-b260-a2d02845676c"). InnerVolumeSpecName "kube-api-access-rbxnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.511887 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7887343b-04ca-42bf-b260-a2d02845676c" (UID: "7887343b-04ca-42bf-b260-a2d02845676c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.515435 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818fc868-fe09-4a91-aab2-91f11bac7386-kube-api-access-x59f6" (OuterVolumeSpecName: "kube-api-access-x59f6") pod "818fc868-fe09-4a91-aab2-91f11bac7386" (UID: "818fc868-fe09-4a91-aab2-91f11bac7386"). InnerVolumeSpecName "kube-api-access-x59f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.540035 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-scripts" (OuterVolumeSpecName: "scripts") pod "9ceb74cf-6023-4536-bc04-6667b5f48967" (UID: "9ceb74cf-6023-4536-bc04-6667b5f48967"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.539833 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-scripts" (OuterVolumeSpecName: "scripts") pod "7887343b-04ca-42bf-b260-a2d02845676c" (UID: "7887343b-04ca-42bf-b260-a2d02845676c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576501 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576541 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ceb74cf-6023-4536-bc04-6667b5f48967-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576554 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576563 4756 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576576 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576584 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a5f0072-febc-4261-b06b-1a10ba6b3391-httpd-run\") on node \"crc\" DevicePath 
\"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576592 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576600 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x59f6\" (UniqueName: \"kubernetes.io/projected/818fc868-fe09-4a91-aab2-91f11bac7386-kube-api-access-x59f6\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576609 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg4kl\" (UniqueName: \"kubernetes.io/projected/3a5f0072-febc-4261-b06b-1a10ba6b3391-kube-api-access-mg4kl\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576617 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576625 4756 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576633 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlh4m\" (UniqueName: \"kubernetes.io/projected/9ceb74cf-6023-4536-bc04-6667b5f48967-kube-api-access-jlh4m\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576642 4756 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.576650 4756 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-rbxnv\" (UniqueName: \"kubernetes.io/projected/7887343b-04ca-42bf-b260-a2d02845676c-kube-api-access-rbxnv\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.579225 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "818fc868-fe09-4a91-aab2-91f11bac7386" (UID: "818fc868-fe09-4a91-aab2-91f11bac7386"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.591267 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a5f0072-febc-4261-b06b-1a10ba6b3391" (UID: "3a5f0072-febc-4261-b06b-1a10ba6b3391"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.616158 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7887343b-04ca-42bf-b260-a2d02845676c" (UID: "7887343b-04ca-42bf-b260-a2d02845676c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.639539 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ceb74cf-6023-4536-bc04-6667b5f48967" (UID: "9ceb74cf-6023-4536-bc04-6667b5f48967"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.660552 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.667436 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-config-data" (OuterVolumeSpecName: "config-data") pod "9ceb74cf-6023-4536-bc04-6667b5f48967" (UID: "9ceb74cf-6023-4536-bc04-6667b5f48967"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.690623 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-config-data" (OuterVolumeSpecName: "config-data") pod "7887343b-04ca-42bf-b260-a2d02845676c" (UID: "7887343b-04ca-42bf-b260-a2d02845676c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.711121 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.711166 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7887343b-04ca-42bf-b260-a2d02845676c-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.711178 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.711190 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ceb74cf-6023-4536-bc04-6667b5f48967-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.711201 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.711212 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.711222 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818fc868-fe09-4a91-aab2-91f11bac7386-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.804998 4756 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-config-data" (OuterVolumeSpecName: "config-data") pod "3a5f0072-febc-4261-b06b-1a10ba6b3391" (UID: "3a5f0072-febc-4261-b06b-1a10ba6b3391"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.818073 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a5f0072-febc-4261-b06b-1a10ba6b3391-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.876733 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.919808 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-nb\") pod \"cd94f14d-05e4-4d74-befd-c508535db058\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.919918 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6hbd\" (UniqueName: \"kubernetes.io/projected/cd94f14d-05e4-4d74-befd-c508535db058-kube-api-access-p6hbd\") pod \"cd94f14d-05e4-4d74-befd-c508535db058\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.919982 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-swift-storage-0\") pod \"cd94f14d-05e4-4d74-befd-c508535db058\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.920120 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-sb\") pod \"cd94f14d-05e4-4d74-befd-c508535db058\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.920191 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-config\") pod \"cd94f14d-05e4-4d74-befd-c508535db058\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.920249 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-svc\") pod \"cd94f14d-05e4-4d74-befd-c508535db058\" (UID: \"cd94f14d-05e4-4d74-befd-c508535db058\") " Dec 03 11:14:48 crc kubenswrapper[4756]: I1203 11:14:48.942611 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd94f14d-05e4-4d74-befd-c508535db058-kube-api-access-p6hbd" (OuterVolumeSpecName: "kube-api-access-p6hbd") pod "cd94f14d-05e4-4d74-befd-c508535db058" (UID: "cd94f14d-05e4-4d74-befd-c508535db058"). InnerVolumeSpecName "kube-api-access-p6hbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.019946 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd94f14d-05e4-4d74-befd-c508535db058" (UID: "cd94f14d-05e4-4d74-befd-c508535db058"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.022510 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6hbd\" (UniqueName: \"kubernetes.io/projected/cd94f14d-05e4-4d74-befd-c508535db058-kube-api-access-p6hbd\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.022545 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.027981 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.036091 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd94f14d-05e4-4d74-befd-c508535db058" (UID: "cd94f14d-05e4-4d74-befd-c508535db058"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.041974 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd94f14d-05e4-4d74-befd-c508535db058" (UID: "cd94f14d-05e4-4d74-befd-c508535db058"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.045885 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd94f14d-05e4-4d74-befd-c508535db058" (UID: "cd94f14d-05e4-4d74-befd-c508535db058"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.091111 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-config" (OuterVolumeSpecName: "config") pod "cd94f14d-05e4-4d74-befd-c508535db058" (UID: "cd94f14d-05e4-4d74-befd-c508535db058"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.123927 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-logs\") pod \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124050 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-combined-ca-bundle\") pod \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124085 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-config-data\") pod \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124105 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-httpd-run\") pod \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124186 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124242 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-scripts\") pod \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124322 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9pcd\" (UniqueName: \"kubernetes.io/projected/f23c1d8d-4584-4a9e-b51f-fe45687ded59-kube-api-access-c9pcd\") pod \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\" (UID: \"f23c1d8d-4584-4a9e-b51f-fe45687ded59\") "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124826 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124846 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124855 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.124863 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd94f14d-05e4-4d74-befd-c508535db058-config\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.125884 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-logs" (OuterVolumeSpecName: "logs") pod "f23c1d8d-4584-4a9e-b51f-fe45687ded59" (UID: "f23c1d8d-4584-4a9e-b51f-fe45687ded59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.126220 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f23c1d8d-4584-4a9e-b51f-fe45687ded59" (UID: "f23c1d8d-4584-4a9e-b51f-fe45687ded59"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.131931 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-scripts" (OuterVolumeSpecName: "scripts") pod "f23c1d8d-4584-4a9e-b51f-fe45687ded59" (UID: "f23c1d8d-4584-4a9e-b51f-fe45687ded59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.134087 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "f23c1d8d-4584-4a9e-b51f-fe45687ded59" (UID: "f23c1d8d-4584-4a9e-b51f-fe45687ded59"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.140982 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23c1d8d-4584-4a9e-b51f-fe45687ded59-kube-api-access-c9pcd" (OuterVolumeSpecName: "kube-api-access-c9pcd") pod "f23c1d8d-4584-4a9e-b51f-fe45687ded59" (UID: "f23c1d8d-4584-4a9e-b51f-fe45687ded59"). InnerVolumeSpecName "kube-api-access-c9pcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.160436 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f23c1d8d-4584-4a9e-b51f-fe45687ded59" (UID: "f23c1d8d-4584-4a9e-b51f-fe45687ded59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.216997 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-config-data" (OuterVolumeSpecName: "config-data") pod "f23c1d8d-4584-4a9e-b51f-fe45687ded59" (UID: "f23c1d8d-4584-4a9e-b51f-fe45687ded59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.230736 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9pcd\" (UniqueName: \"kubernetes.io/projected/f23c1d8d-4584-4a9e-b51f-fe45687ded59-kube-api-access-c9pcd\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.230784 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-logs\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.230797 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.230815 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.230826 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f23c1d8d-4584-4a9e-b51f-fe45687ded59-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.230865 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.230885 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f23c1d8d-4584-4a9e-b51f-fe45687ded59-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.338768 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.443384 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422" event={"ID":"cd94f14d-05e4-4d74-befd-c508535db058","Type":"ContainerDied","Data":"fcb7c24ce1107e0a286cb16bcb7535050149ebd694e516e290e313a20f758f94"}
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.443457 4756 scope.go:117] "RemoveContainer" containerID="5c3bb3f106b58476397ce489e98dd51c8c5a2c085a983f84edfea1a1dcea6ab8"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.443650 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-4r422"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.454184 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.481439 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04","Type":"ContainerStarted","Data":"6a7314e6b3df22abd0621938a12d382ad93d68ac1af169f24f844be844f856a6"}
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.506904 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f23c1d8d-4584-4a9e-b51f-fe45687ded59","Type":"ContainerDied","Data":"b12af819660296088994282388b31aca03cee4b85a23265cc4bc6e2ba71c68ac"}
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.507230 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.509245 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-4r422"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.517400 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.518853 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-745d87f76f-mbrkc" event={"ID":"4234afcf-96f0-4340-b5c0-e1aac6c4dacb","Type":"ContainerStarted","Data":"b380846653390c909a5707909a04fb48b5041127ab7d5e6f2ea0da5d7fe20354"}
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.518899 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-745d87f76f-mbrkc"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.539776 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-4r422"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.564315 4756 scope.go:117] "RemoveContainer" containerID="e59088fa63f99d325d2e3a0574be4d54f6a507149e33211d2c773c0cef4561c5"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.605264 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.640317 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.717057 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-774fbcb69f-lqf5k"]
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718304 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd94f14d-05e4-4d74-befd-c508535db058" containerName="dnsmasq-dns"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718338 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd94f14d-05e4-4d74-befd-c508535db058" containerName="dnsmasq-dns"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718375 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7887343b-04ca-42bf-b260-a2d02845676c" containerName="keystone-bootstrap"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718386 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7887343b-04ca-42bf-b260-a2d02845676c" containerName="keystone-bootstrap"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718424 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd94f14d-05e4-4d74-befd-c508535db058" containerName="init"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718432 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd94f14d-05e4-4d74-befd-c508535db058" containerName="init"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718452 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818fc868-fe09-4a91-aab2-91f11bac7386" containerName="barbican-db-sync"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718460 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="818fc868-fe09-4a91-aab2-91f11bac7386" containerName="barbican-db-sync"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718471 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ceb74cf-6023-4536-bc04-6667b5f48967" containerName="placement-db-sync"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718478 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ceb74cf-6023-4536-bc04-6667b5f48967" containerName="placement-db-sync"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718496 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-httpd"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718504 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-httpd"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718519 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerName="glance-log"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718526 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerName="glance-log"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718533 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerName="glance-httpd"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718541 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerName="glance-httpd"
Dec 03 11:14:49 crc kubenswrapper[4756]: E1203 11:14:49.718556 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-log"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718563 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-log"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718797 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerName="glance-log"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718819 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7887343b-04ca-42bf-b260-a2d02845676c" containerName="keystone-bootstrap"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718834 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd94f14d-05e4-4d74-befd-c508535db058" containerName="dnsmasq-dns"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718843 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-httpd"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718854 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" containerName="glance-log"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718863 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" containerName="glance-httpd"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718877 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="818fc868-fe09-4a91-aab2-91f11bac7386" containerName="barbican-db-sync"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.718898 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ceb74cf-6023-4536-bc04-6667b5f48967" containerName="placement-db-sync"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.720279 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.731651 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-g2fts"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.731987 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.732230 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.761327 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.764272 4756 scope.go:117] "RemoveContainer" containerID="b85dbf6a936ef3511edefafddcb6dfda44753db77af7a4a9b96651f2d423d9eb"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.780819 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-config-data\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.780901 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr2lc\" (UniqueName: \"kubernetes.io/projected/2fb0446e-9a35-4390-9290-b539e6a8718e-kube-api-access-gr2lc\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.781025 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-combined-ca-bundle\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.781076 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fb0446e-9a35-4390-9290-b539e6a8718e-logs\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.781135 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-config-data-custom\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.806075 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c6566fd84-f6lrg"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.807028 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.807641 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.822610 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-745d87f76f-mbrkc" podStartSLOduration=8.82256828 podStartE2EDuration="8.82256828s" podCreationTimestamp="2025-12-03 11:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:49.689782579 +0000 UTC m=+1300.719783823" watchObservedRunningTime="2025-12-03 11:14:49.82256828 +0000 UTC m=+1300.852569524"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.828785 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.839675 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.852688 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b4cbg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.853451 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.853626 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.853748 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xp874"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.853986 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.854098 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6796bdbbcb-s99wh"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.859840 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.860228 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.860276 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.873690 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.877355 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.883238 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-config-data-custom\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.883312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-config-data\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.883348 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr2lc\" (UniqueName: \"kubernetes.io/projected/2fb0446e-9a35-4390-9290-b539e6a8718e-kube-api-access-gr2lc\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.883413 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-combined-ca-bundle\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.883448 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fb0446e-9a35-4390-9290-b539e6a8718e-logs\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.884053 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fb0446e-9a35-4390-9290-b539e6a8718e-logs\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.895738 4756 scope.go:117] "RemoveContainer" containerID="21abbddc77811ab7b29eb39ef2f3405138eb655ee0981a915c15b7d612096f42"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.899243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-config-data-custom\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.900660 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c6566fd84-f6lrg"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.901584 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-combined-ca-bundle\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.903176 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb0446e-9a35-4390-9290-b539e6a8718e-config-data\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.930592 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-774fbcb69f-lqf5k"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.935125 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f6f9857bb-rncmk" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.944332 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d57c9944-4rhnd"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.955813 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr2lc\" (UniqueName: \"kubernetes.io/projected/2fb0446e-9a35-4390-9290-b539e6a8718e-kube-api-access-gr2lc\") pod \"barbican-worker-774fbcb69f-lqf5k\" (UID: \"2fb0446e-9a35-4390-9290-b539e6a8718e\") " pod="openstack/barbican-worker-774fbcb69f-lqf5k"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.956209 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d57c9944-4rhnd"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.968600 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.969122 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wldm4"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.969658 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.970034 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.976857 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.970497 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985053 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-combined-ca-bundle\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985124 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-logs\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985181 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-config-data-custom\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985204 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985235 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985257 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27fxx\" (UniqueName: \"kubernetes.io/projected/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-kube-api-access-27fxx\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985287 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-credential-keys\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985320 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985364 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-internal-tls-certs\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985423 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmnd\" (UniqueName: \"kubernetes.io/projected/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-kube-api-access-ghmnd\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985453 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-config-data\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985487 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-config-data\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985514 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrbmj\" (UniqueName: \"kubernetes.io/projected/0507f2c0-f9b3-490f-b2ac-45be22f96c05-kube-api-access-xrbmj\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985535 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-scripts\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985553 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-public-tls-certs\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985574 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-config-data\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985594 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-scripts\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985618 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-fernet-keys\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-combined-ca-bundle\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985664 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:49 crc kubenswrapper[4756]: I1203 11:14:49.985681 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-logs\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh"
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.010602 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.047929 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.085605 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6796bdbbcb-s99wh"]
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.087143 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-scripts\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.087255 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-public-tls-certs\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.087379 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-config-data\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg"
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088187 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-scripts\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0"
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088345 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName:
\"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-fernet-keys\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088482 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-combined-ca-bundle\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088578 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-config-data\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088663 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088733 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-logs\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088858 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-public-tls-certs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.088963 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-combined-ca-bundle\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089036 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-combined-ca-bundle\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-logs\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089221 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdvc\" (UniqueName: \"kubernetes.io/projected/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-kube-api-access-fsdvc\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-config-data-custom\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089403 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089503 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089599 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27fxx\" (UniqueName: \"kubernetes.io/projected/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-kube-api-access-27fxx\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089679 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-scripts\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089764 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-credential-keys\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089832 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-internal-tls-certs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.089946 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.090069 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-logs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.090183 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-internal-tls-certs\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.090837 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmnd\" (UniqueName: 
\"kubernetes.io/projected/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-kube-api-access-ghmnd\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.090942 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-config-data\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.091114 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-config-data\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.091522 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrbmj\" (UniqueName: \"kubernetes.io/projected/0507f2c0-f9b3-490f-b2ac-45be22f96c05-kube-api-access-xrbmj\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.100623 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-774fbcb69f-lqf5k" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.106360 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-scripts\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.107092 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-combined-ca-bundle\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.110590 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-fernet-keys\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.112241 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.112690 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.113187 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-logs\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.117393 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-combined-ca-bundle\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.117756 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-logs\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.121854 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-config-data\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.122272 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.124600 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-credential-keys\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.134716 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-internal-tls-certs\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.141138 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.141234 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-scripts\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.157597 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-config-data\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.158448 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-config-data-custom\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.162810 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-public-tls-certs\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.164483 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrbmj\" (UniqueName: \"kubernetes.io/projected/0507f2c0-f9b3-490f-b2ac-45be22f96c05-kube-api-access-xrbmj\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.184315 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-config-data\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.185491 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d57c9944-4rhnd"] Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.195071 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdvc\" (UniqueName: \"kubernetes.io/projected/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-kube-api-access-fsdvc\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 
11:14:50.195217 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-scripts\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.195252 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-internal-tls-certs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.195294 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-logs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.195422 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-config-data\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.195476 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-public-tls-certs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.195512 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-combined-ca-bundle\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.199740 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-logs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.209836 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cmmqm"] Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.220618 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.242800 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghmnd\" (UniqueName: \"kubernetes.io/projected/9b1ddcb6-cd3d-438f-a007-a1527ab5be16-kube-api-access-ghmnd\") pod \"keystone-7c6566fd84-f6lrg\" (UID: \"9b1ddcb6-cd3d-438f-a007-a1527ab5be16\") " pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.243202 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27fxx\" (UniqueName: \"kubernetes.io/projected/ffa571bc-b5f1-4b8f-be29-9eae5f21db25-kube-api-access-27fxx\") pod \"barbican-keystone-listener-6796bdbbcb-s99wh\" (UID: \"ffa571bc-b5f1-4b8f-be29-9eae5f21db25\") " pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.245624 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-scripts\") pod \"placement-d57c9944-4rhnd\" (UID: 
\"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.246114 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-internal-tls-certs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.250154 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-combined-ca-bundle\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.252428 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdvc\" (UniqueName: \"kubernetes.io/projected/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-kube-api-access-fsdvc\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.263611 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66bc647888-tcn4m" podUID="00c35a0d-70b4-453d-974a-85b638505280" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.272674 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-public-tls-certs\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 
11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.278606 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.294364 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10ef169a-f6d1-4d7e-9ff1-8cca85adce2b-config-data\") pod \"placement-d57c9944-4rhnd\" (UID: \"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b\") " pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.295807 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.297859 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.300246 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.300329 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.300399 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: 
\"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.300431 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.300476 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfqp\" (UniqueName: \"kubernetes.io/projected/d17d9ad0-25f3-4e48-916a-814feb88ee3a-kube-api-access-hbfqp\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.300658 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.300690 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-config\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.306898 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.351917 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.355078 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.400194 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.403888 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxg9\" (UniqueName: \"kubernetes.io/projected/069bebae-8f44-4248-9a18-5d1228c32cf2-kube-api-access-rjxg9\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.403965 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404035 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404068 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " 
pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404109 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404138 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404167 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfqp\" (UniqueName: \"kubernetes.io/projected/d17d9ad0-25f3-4e48-916a-814feb88ee3a-kube-api-access-hbfqp\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404200 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404556 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " 
pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404830 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.404890 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.405086 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.405167 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-config\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" 
Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.405338 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.406069 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.406368 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.406401 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-config\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.408032 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.414046 4756 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cmmqm"] Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.468895 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfqp\" (UniqueName: \"kubernetes.io/projected/d17d9ad0-25f3-4e48-916a-814feb88ee3a-kube-api-access-hbfqp\") pod \"dnsmasq-dns-85ff748b95-cmmqm\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.477274 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507404 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjxg9\" (UniqueName: \"kubernetes.io/projected/069bebae-8f44-4248-9a18-5d1228c32cf2-kube-api-access-rjxg9\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507522 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507552 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507593 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507632 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507663 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-logs\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507722 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.507757 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.513572 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.517112 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.517299 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.519073 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.531860 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.532937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " 
pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.539852 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjxg9\" (UniqueName: \"kubernetes.io/projected/069bebae-8f44-4248-9a18-5d1228c32cf2-kube-api-access-rjxg9\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.541225 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.549149 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f87d4bff8-sdtnv"] Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.552198 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.557052 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.588266 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.589979 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.598422 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fh72" event={"ID":"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96","Type":"ContainerStarted","Data":"2cc4f65ff8a9b938578d5f21a8239fa46348a808451dcc67ad254c18a485195d"} Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.669807 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f87d4bff8-sdtnv"] Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.711625 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-combined-ca-bundle\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.712010 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hqx\" (UniqueName: \"kubernetes.io/projected/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-kube-api-access-d5hqx\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.712145 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-logs\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.712244 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.712566 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data-custom\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.737884 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.799899 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7fh72" podStartSLOduration=4.213401223 podStartE2EDuration="50.79986912s" podCreationTimestamp="2025-12-03 11:14:00 +0000 UTC" firstStartedPulling="2025-12-03 11:14:01.874687473 +0000 UTC m=+1252.904688717" lastFinishedPulling="2025-12-03 11:14:48.46115537 +0000 UTC m=+1299.491156614" observedRunningTime="2025-12-03 11:14:50.644235198 +0000 UTC m=+1301.674236442" watchObservedRunningTime="2025-12-03 11:14:50.79986912 +0000 UTC m=+1301.829870364" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.817320 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-logs\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.817394 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.817464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data-custom\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.817506 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-combined-ca-bundle\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.817563 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hqx\" (UniqueName: \"kubernetes.io/projected/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-kube-api-access-d5hqx\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.819000 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-logs\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.830992 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data-custom\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.845844 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.851343 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-combined-ca-bundle\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.854723 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hqx\" (UniqueName: \"kubernetes.io/projected/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-kube-api-access-d5hqx\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.860349 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data\") pod \"barbican-api-6f87d4bff8-sdtnv\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.932691 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:50 crc kubenswrapper[4756]: I1203 11:14:50.988890 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-774fbcb69f-lqf5k"] Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.187601 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6796bdbbcb-s99wh"] Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.290061 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5f0072-febc-4261-b06b-1a10ba6b3391" path="/var/lib/kubelet/pods/3a5f0072-febc-4261-b06b-1a10ba6b3391/volumes" Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.292842 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd94f14d-05e4-4d74-befd-c508535db058" path="/var/lib/kubelet/pods/cd94f14d-05e4-4d74-befd-c508535db058/volumes" Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.293659 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f23c1d8d-4584-4a9e-b51f-fe45687ded59" path="/var/lib/kubelet/pods/f23c1d8d-4584-4a9e-b51f-fe45687ded59/volumes" Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.680876 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" event={"ID":"ffa571bc-b5f1-4b8f-be29-9eae5f21db25","Type":"ContainerStarted","Data":"2f1b15179fc7c1b520782318235a7b842490c1017557030a17b02de83daef34b"} Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.683919 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d57c9944-4rhnd"] Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.695213 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-774fbcb69f-lqf5k" event={"ID":"2fb0446e-9a35-4390-9290-b539e6a8718e","Type":"ContainerStarted","Data":"f801be1c3c41414fb70512b148eb19b3ec8f805664c60f52183487721e8b93d6"} Dec 03 11:14:51 crc 
kubenswrapper[4756]: I1203 11:14:51.705239 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c6566fd84-f6lrg"] Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.731191 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cmmqm"] Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.838965 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:14:51 crc kubenswrapper[4756]: I1203 11:14:51.971837 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:14:52 crc kubenswrapper[4756]: I1203 11:14:52.010399 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f87d4bff8-sdtnv"] Dec 03 11:14:52 crc kubenswrapper[4756]: W1203 11:14:52.080784 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd26e7e64_2332_45c9_a67a_dcaa6a43dc5d.slice/crio-649d3a72c8ae4497b188e447f2ca8d2ab6cb27e4aa51326927caea8579ec95b3 WatchSource:0}: Error finding container 649d3a72c8ae4497b188e447f2ca8d2ab6cb27e4aa51326927caea8579ec95b3: Status 404 returned error can't find the container with id 649d3a72c8ae4497b188e447f2ca8d2ab6cb27e4aa51326927caea8579ec95b3 Dec 03 11:14:52 crc kubenswrapper[4756]: I1203 11:14:52.738401 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f87d4bff8-sdtnv" event={"ID":"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d","Type":"ContainerStarted","Data":"649d3a72c8ae4497b188e447f2ca8d2ab6cb27e4aa51326927caea8579ec95b3"} Dec 03 11:14:52 crc kubenswrapper[4756]: I1203 11:14:52.741109 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6566fd84-f6lrg" event={"ID":"9b1ddcb6-cd3d-438f-a007-a1527ab5be16","Type":"ContainerStarted","Data":"e0fa209e0c521a3e5bf32a69f64040cdbe10a8548ae47345ce6fa8e95f8ca485"} Dec 03 11:14:52 crc 
kubenswrapper[4756]: I1203 11:14:52.743066 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"069bebae-8f44-4248-9a18-5d1228c32cf2","Type":"ContainerStarted","Data":"469674dd1f87e0360358ab7d7342491adc36e0c5a3357c143ba8dd650233da2a"} Dec 03 11:14:52 crc kubenswrapper[4756]: I1203 11:14:52.744301 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" event={"ID":"d17d9ad0-25f3-4e48-916a-814feb88ee3a","Type":"ContainerStarted","Data":"d47f2a07775690bd9393e70b9174554d378890c435ed6ac3b2dec8cc09b6989b"} Dec 03 11:14:52 crc kubenswrapper[4756]: I1203 11:14:52.745499 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d57c9944-4rhnd" event={"ID":"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b","Type":"ContainerStarted","Data":"5cfa85215cd76e42e0004d50929e09311dcb518d485266219592d7594bd384fd"} Dec 03 11:14:52 crc kubenswrapper[4756]: I1203 11:14:52.746749 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0507f2c0-f9b3-490f-b2ac-45be22f96c05","Type":"ContainerStarted","Data":"93216fc6237a44e02fcd4d3c34bc22d27bc6190de701bcf1020d476eabf31d8c"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.778149 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f87d4bff8-sdtnv" event={"ID":"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d","Type":"ContainerStarted","Data":"52cf40f8eac376ac4390e7a9105c7685055dfea46f395492db5f8f41472d0643"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.779433 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.779493 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f87d4bff8-sdtnv" 
event={"ID":"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d","Type":"ContainerStarted","Data":"21ada5d2ba2f401d2950eb31b7582fac5cec5063620dea08c151f247dca3b386"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.779521 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.792338 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c6566fd84-f6lrg" event={"ID":"9b1ddcb6-cd3d-438f-a007-a1527ab5be16","Type":"ContainerStarted","Data":"c7466e8b20652674cd218c55d351221045ec8ee709d06d9eb4d5d395c11be84e"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.793558 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.808505 4756 generic.go:334] "Generic (PLEG): container finished" podID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerID="15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84" exitCode=0 Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.808646 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" event={"ID":"d17d9ad0-25f3-4e48-916a-814feb88ee3a","Type":"ContainerDied","Data":"15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.813386 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f87d4bff8-sdtnv" podStartSLOduration=3.813359297 podStartE2EDuration="3.813359297s" podCreationTimestamp="2025-12-03 11:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:53.799238068 +0000 UTC m=+1304.829239312" watchObservedRunningTime="2025-12-03 11:14:53.813359297 +0000 UTC m=+1304.843360541" Dec 03 11:14:53 crc kubenswrapper[4756]: 
I1203 11:14:53.829358 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d57c9944-4rhnd" event={"ID":"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b","Type":"ContainerStarted","Data":"a4b15700cbba186673f79b9f95eb46f9ab7873a3cfc7c4d841f7f63cc9f76050"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.829419 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d57c9944-4rhnd" event={"ID":"10ef169a-f6d1-4d7e-9ff1-8cca85adce2b","Type":"ContainerStarted","Data":"dc3ca79ff1d94f6f0072ad260c59eafe6f35e2098ff100bb7b7c9ed503e25331"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.830424 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.830460 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.835910 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c6566fd84-f6lrg" podStartSLOduration=4.835896579 podStartE2EDuration="4.835896579s" podCreationTimestamp="2025-12-03 11:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:53.83272374 +0000 UTC m=+1304.862724984" watchObservedRunningTime="2025-12-03 11:14:53.835896579 +0000 UTC m=+1304.865897813" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.848083 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0507f2c0-f9b3-490f-b2ac-45be22f96c05","Type":"ContainerStarted","Data":"55ef2c10a25c455e62cb0a614e6a482155febc18f3aa686f80e8f9e268e7f293"} Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.913040 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d57c9944-4rhnd" 
podStartSLOduration=4.913017438 podStartE2EDuration="4.913017438s" podCreationTimestamp="2025-12-03 11:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:53.910461088 +0000 UTC m=+1304.940462332" watchObservedRunningTime="2025-12-03 11:14:53.913017438 +0000 UTC m=+1304.943018682" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.961723 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75c47c8598-6kchw"] Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.964186 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.978925 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.979242 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.987123 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75c47c8598-6kchw"] Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.993394 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-public-tls-certs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.993489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-config-data\") pod \"barbican-api-75c47c8598-6kchw\" (UID: 
\"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.993528 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b51d51-7a9c-4b73-a277-c488661e4af0-logs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.993556 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-config-data-custom\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.993588 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdtz\" (UniqueName: \"kubernetes.io/projected/09b51d51-7a9c-4b73-a277-c488661e4af0-kube-api-access-pwdtz\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.993674 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-combined-ca-bundle\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:53 crc kubenswrapper[4756]: I1203 11:14:53.993712 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-internal-tls-certs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.106763 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-public-tls-certs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.110316 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-config-data\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.110526 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b51d51-7a9c-4b73-a277-c488661e4af0-logs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.110661 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-config-data-custom\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.110805 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwdtz\" (UniqueName: 
\"kubernetes.io/projected/09b51d51-7a9c-4b73-a277-c488661e4af0-kube-api-access-pwdtz\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.111138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-combined-ca-bundle\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.111264 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-internal-tls-certs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.111294 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b51d51-7a9c-4b73-a277-c488661e4af0-logs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.118326 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-config-data\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.126915 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-combined-ca-bundle\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.127505 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-config-data-custom\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.135401 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-internal-tls-certs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.136404 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b51d51-7a9c-4b73-a277-c488661e4af0-public-tls-certs\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.149359 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwdtz\" (UniqueName: \"kubernetes.io/projected/09b51d51-7a9c-4b73-a277-c488661e4af0-kube-api-access-pwdtz\") pod \"barbican-api-75c47c8598-6kchw\" (UID: \"09b51d51-7a9c-4b73-a277-c488661e4af0\") " pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.312323 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.876173 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"069bebae-8f44-4248-9a18-5d1228c32cf2","Type":"ContainerStarted","Data":"d4c8f09b515eb62bf4ec2daadbc55c6db14607f89fad07e890899c860ff31e1c"} Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.894037 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" event={"ID":"d17d9ad0-25f3-4e48-916a-814feb88ee3a","Type":"ContainerStarted","Data":"3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1"} Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.895114 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:14:54 crc kubenswrapper[4756]: I1203 11:14:54.955071 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" podStartSLOduration=5.955047972 podStartE2EDuration="5.955047972s" podCreationTimestamp="2025-12-03 11:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:54.929568889 +0000 UTC m=+1305.959570133" watchObservedRunningTime="2025-12-03 11:14:54.955047972 +0000 UTC m=+1305.985049216" Dec 03 11:14:55 crc kubenswrapper[4756]: I1203 11:14:55.913763 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0507f2c0-f9b3-490f-b2ac-45be22f96c05","Type":"ContainerStarted","Data":"57ef720a099194ecbf8b5b1c61e2a1bce3f746f846b58554616b2e68920c1550"} Dec 03 11:14:55 crc kubenswrapper[4756]: I1203 11:14:55.952734 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.952704555 
podStartE2EDuration="6.952704555s" podCreationTimestamp="2025-12-03 11:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:55.948091641 +0000 UTC m=+1306.978092885" watchObservedRunningTime="2025-12-03 11:14:55.952704555 +0000 UTC m=+1306.982705799" Dec 03 11:14:57 crc kubenswrapper[4756]: I1203 11:14:57.441859 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75c47c8598-6kchw"] Dec 03 11:14:57 crc kubenswrapper[4756]: I1203 11:14:57.952423 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"069bebae-8f44-4248-9a18-5d1228c32cf2","Type":"ContainerStarted","Data":"4dd4714faa7b8bce20126f15df29eab17b958ed87dc2f18b4fb516cc286970be"} Dec 03 11:14:57 crc kubenswrapper[4756]: I1203 11:14:57.958374 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" event={"ID":"ffa571bc-b5f1-4b8f-be29-9eae5f21db25","Type":"ContainerStarted","Data":"fa4d1876b2b09878b0ddc22ab551065499163ee6dbe7dbbe892dd80e7e1cc015"} Dec 03 11:14:57 crc kubenswrapper[4756]: I1203 11:14:57.961308 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-774fbcb69f-lqf5k" event={"ID":"2fb0446e-9a35-4390-9290-b539e6a8718e","Type":"ContainerStarted","Data":"ad44a0816a908f5b2e0eb11e109fea9be5df11c039c698f45ba467a1aa0e74d1"} Dec 03 11:14:57 crc kubenswrapper[4756]: I1203 11:14:57.999158 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.999135841 podStartE2EDuration="8.999135841s" podCreationTimestamp="2025-12-03 11:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:14:57.973259856 +0000 UTC m=+1309.003261100" 
watchObservedRunningTime="2025-12-03 11:14:57.999135841 +0000 UTC m=+1309.029137085" Dec 03 11:14:59 crc kubenswrapper[4756]: I1203 11:14:59.925708 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f6f9857bb-rncmk" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.152654 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6"] Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.155787 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.162063 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.162362 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.201137 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645zk\" (UniqueName: \"kubernetes.io/projected/0a4a245b-e6cc-418e-9fbc-6270c50fb523-kube-api-access-645zk\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.201884 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0a4a245b-e6cc-418e-9fbc-6270c50fb523-config-volume\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.202207 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a4a245b-e6cc-418e-9fbc-6270c50fb523-secret-volume\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.203338 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6"] Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.255651 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66bc647888-tcn4m" podUID="00c35a0d-70b4-453d-974a-85b638505280" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.305339 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a4a245b-e6cc-418e-9fbc-6270c50fb523-secret-volume\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.305480 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-645zk\" (UniqueName: \"kubernetes.io/projected/0a4a245b-e6cc-418e-9fbc-6270c50fb523-kube-api-access-645zk\") pod \"collect-profiles-29412675-j86q6\" (UID: 
\"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.305942 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a4a245b-e6cc-418e-9fbc-6270c50fb523-config-volume\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.307777 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a4a245b-e6cc-418e-9fbc-6270c50fb523-config-volume\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.327171 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-645zk\" (UniqueName: \"kubernetes.io/projected/0a4a245b-e6cc-418e-9fbc-6270c50fb523-kube-api-access-645zk\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.337537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a4a245b-e6cc-418e-9fbc-6270c50fb523-secret-volume\") pod \"collect-profiles-29412675-j86q6\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.530420 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.591457 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.591525 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.638546 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.650859 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.739198 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.850511 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.851071 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.855142 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6kdvr"] Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.855510 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="dnsmasq-dns" containerID="cri-o://a9a5e4493e4bb0868fea47f3040c94de7637bcc857abf28583f64b587b8a495e" gracePeriod=10 Dec 03 11:15:00 crc kubenswrapper[4756]: I1203 11:15:00.925380 4756 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:01 crc kubenswrapper[4756]: I1203 11:15:01.136441 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:01 crc kubenswrapper[4756]: I1203 11:15:01.163646 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 11:15:01 crc kubenswrapper[4756]: I1203 11:15:01.164172 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 11:15:01 crc kubenswrapper[4756]: I1203 11:15:01.164198 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:01 crc kubenswrapper[4756]: I1203 11:15:01.164435 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:01 crc kubenswrapper[4756]: E1203 11:15:01.644253 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eadf119_7f74_4248_bd3f_1eabc2cdbf92.slice/crio-conmon-a9a5e4493e4bb0868fea47f3040c94de7637bcc857abf28583f64b587b8a495e.scope\": RecentStats: unable to find data in memory cache]" Dec 03 11:15:02 crc kubenswrapper[4756]: I1203 11:15:02.161931 4756 generic.go:334] "Generic (PLEG): container finished" podID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerID="a9a5e4493e4bb0868fea47f3040c94de7637bcc857abf28583f64b587b8a495e" exitCode=0 Dec 03 11:15:02 crc kubenswrapper[4756]: I1203 11:15:02.162002 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" event={"ID":"9eadf119-7f74-4248-bd3f-1eabc2cdbf92","Type":"ContainerDied","Data":"a9a5e4493e4bb0868fea47f3040c94de7637bcc857abf28583f64b587b8a495e"} Dec 03 11:15:02 
crc kubenswrapper[4756]: I1203 11:15:02.362338 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Dec 03 11:15:03 crc kubenswrapper[4756]: I1203 11:15:03.184832 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 11:15:03 crc kubenswrapper[4756]: I1203 11:15:03.185594 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 11:15:04 crc kubenswrapper[4756]: I1203 11:15:04.573666 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 11:15:04 crc kubenswrapper[4756]: I1203 11:15:04.574260 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 11:15:04 crc kubenswrapper[4756]: I1203 11:15:04.691673 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:15:04 crc kubenswrapper[4756]: I1203 11:15:04.749319 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 11:15:04 crc kubenswrapper[4756]: I1203 11:15:04.867782 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:04 crc kubenswrapper[4756]: I1203 11:15:04.904792 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:15:05 crc kubenswrapper[4756]: I1203 11:15:05.225694 4756 generic.go:334] "Generic (PLEG): container finished" podID="aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" containerID="2cc4f65ff8a9b938578d5f21a8239fa46348a808451dcc67ad254c18a485195d" exitCode=0 Dec 03 11:15:05 crc kubenswrapper[4756]: I1203 11:15:05.227027 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fh72" event={"ID":"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96","Type":"ContainerDied","Data":"2cc4f65ff8a9b938578d5f21a8239fa46348a808451dcc67ad254c18a485195d"} Dec 03 11:15:05 crc kubenswrapper[4756]: I1203 11:15:05.811642 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.328736 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.358495 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Dec 03 11:15:07 crc kubenswrapper[4756]: W1203 11:15:07.529356 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09b51d51_7a9c_4b73_a277_c488661e4af0.slice/crio-f970120dc4e81f67836f4f607b13f773896cf2e65c80f200f1b9fde430589b67 WatchSource:0}: Error finding container f970120dc4e81f67836f4f607b13f773896cf2e65c80f200f1b9fde430589b67: Status 404 returned error can't find the container with id f970120dc4e81f67836f4f607b13f773896cf2e65c80f200f1b9fde430589b67 Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.631810 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7fh72" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.732039 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-config-data\") pod \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.732133 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-db-sync-config-data\") pod \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.732214 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-etc-machine-id\") pod \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.732262 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-combined-ca-bundle\") pod \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.732415 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" (UID: "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.732471 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-scripts\") pod \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.732635 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5lm\" (UniqueName: \"kubernetes.io/projected/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-kube-api-access-ps5lm\") pod \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\" (UID: \"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96\") " Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.733149 4756 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.746251 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-scripts" (OuterVolumeSpecName: "scripts") pod "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" (UID: "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.747175 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" (UID: "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.770937 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-kube-api-access-ps5lm" (OuterVolumeSpecName: "kube-api-access-ps5lm") pod "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" (UID: "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96"). InnerVolumeSpecName "kube-api-access-ps5lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.788364 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" (UID: "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.834910 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.834997 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps5lm\" (UniqueName: \"kubernetes.io/projected/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-kube-api-access-ps5lm\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.835018 4756 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.835031 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.852030 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-config-data" (OuterVolumeSpecName: "config-data") pod "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" (UID: "aa7e078c-dfed-40c1-ac1c-d9db28aa9d96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:07 crc kubenswrapper[4756]: I1203 11:15:07.937077 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:08 crc kubenswrapper[4756]: I1203 11:15:08.274040 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7fh72" Dec 03 11:15:08 crc kubenswrapper[4756]: I1203 11:15:08.274042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7fh72" event={"ID":"aa7e078c-dfed-40c1-ac1c-d9db28aa9d96","Type":"ContainerDied","Data":"dea6f57540fdff2bd6327650a7624c48ed2850917a780d8d9f435e9203e3a229"} Dec 03 11:15:08 crc kubenswrapper[4756]: I1203 11:15:08.274236 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea6f57540fdff2bd6327650a7624c48ed2850917a780d8d9f435e9203e3a229" Dec 03 11:15:08 crc kubenswrapper[4756]: I1203 11:15:08.275850 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75c47c8598-6kchw" event={"ID":"09b51d51-7a9c-4b73-a277-c488661e4af0","Type":"ContainerStarted","Data":"f970120dc4e81f67836f4f607b13f773896cf2e65c80f200f1b9fde430589b67"} Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.046467 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:09 crc 
kubenswrapper[4756]: E1203 11:15:09.046987 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" containerName="cinder-db-sync" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.047003 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" containerName="cinder-db-sync" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.047248 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" containerName="cinder-db-sync" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.049853 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.054227 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xngvc" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.062830 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.063664 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.076540 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.076564 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.164264 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcljm\" (UniqueName: \"kubernetes.io/projected/e3b76a2e-0804-4167-884f-ea552dd0ef7a-kube-api-access-bcljm\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc 
kubenswrapper[4756]: I1203 11:15:09.164836 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.164872 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3b76a2e-0804-4167-884f-ea552dd0ef7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.164964 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.165021 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.165150 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.266750 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.266822 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.266860 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.266905 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcljm\" (UniqueName: \"kubernetes.io/projected/e3b76a2e-0804-4167-884f-ea552dd0ef7a-kube-api-access-bcljm\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.266963 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.266987 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e3b76a2e-0804-4167-884f-ea552dd0ef7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.267108 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3b76a2e-0804-4167-884f-ea552dd0ef7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.273317 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.273351 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.273562 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.315032 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.340906 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.343350 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.348644 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.356106 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcljm\" (UniqueName: \"kubernetes.io/projected/e3b76a2e-0804-4167-884f-ea552dd0ef7a-kube-api-access-bcljm\") pod \"cinder-scheduler-0\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.401084 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xngvc" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.414431 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.606450 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nc9r6"] Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.610213 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nc9r6"] Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.610258 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" event={"ID":"9eadf119-7f74-4248-bd3f-1eabc2cdbf92","Type":"ContainerDied","Data":"0b1bd800190fbe8a3f2d9eb8a089797afae6f6612dbcd13bf12726aa06ee4b52"} Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.610295 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1bd800190fbe8a3f2d9eb8a089797afae6f6612dbcd13bf12726aa06ee4b52" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.610414 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.624272 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.663449 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669377 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2d6381a-8b32-4174-93e9-e573dead9f60-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669427 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5dwp\" (UniqueName: \"kubernetes.io/projected/a2d6381a-8b32-4174-93e9-e573dead9f60-kube-api-access-d5dwp\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669453 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669480 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669495 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-scripts\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 
crc kubenswrapper[4756]: I1203 11:15:09.669518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-config\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669534 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669556 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669574 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9b6\" (UniqueName: \"kubernetes.io/projected/3582e52f-4806-401b-a822-6a98777de800-kube-api-access-tg9b6\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669595 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 
11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669656 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669710 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d6381a-8b32-4174-93e9-e573dead9f60-logs\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.669727 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data-custom\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.670835 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.699773 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.771994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2d6381a-8b32-4174-93e9-e573dead9f60-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772047 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5dwp\" (UniqueName: 
\"kubernetes.io/projected/a2d6381a-8b32-4174-93e9-e573dead9f60-kube-api-access-d5dwp\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772085 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772127 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772157 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-scripts\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772192 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-config\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772218 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: 
\"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772249 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772285 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9b6\" (UniqueName: \"kubernetes.io/projected/3582e52f-4806-401b-a822-6a98777de800-kube-api-access-tg9b6\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772325 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772400 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772472 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d6381a-8b32-4174-93e9-e573dead9f60-logs\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.772504 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data-custom\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.773441 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2d6381a-8b32-4174-93e9-e573dead9f60-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.777513 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.778084 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-config\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.780074 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d6381a-8b32-4174-93e9-e573dead9f60-logs\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.780469 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.781243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.782211 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.792149 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.816745 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-scripts\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.820095 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data-custom\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.820197 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data\") pod 
\"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.822530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9b6\" (UniqueName: \"kubernetes.io/projected/3582e52f-4806-401b-a822-6a98777de800-kube-api-access-tg9b6\") pod \"dnsmasq-dns-5c9776ccc5-nc9r6\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.874431 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-sb\") pod \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.874483 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-config\") pod \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.874550 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-nb\") pod \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.874593 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqz9l\" (UniqueName: \"kubernetes.io/projected/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-kube-api-access-wqz9l\") pod \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.874669 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-svc\") pod \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.874714 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-swift-storage-0\") pod \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\" (UID: \"9eadf119-7f74-4248-bd3f-1eabc2cdbf92\") " Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.906939 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5dwp\" (UniqueName: \"kubernetes.io/projected/a2d6381a-8b32-4174-93e9-e573dead9f60-kube-api-access-d5dwp\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.907740 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " pod="openstack/cinder-api-0" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.921529 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-kube-api-access-wqz9l" (OuterVolumeSpecName: "kube-api-access-wqz9l") pod "9eadf119-7f74-4248-bd3f-1eabc2cdbf92" (UID: "9eadf119-7f74-4248-bd3f-1eabc2cdbf92"). InnerVolumeSpecName "kube-api-access-wqz9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:09 crc kubenswrapper[4756]: I1203 11:15:09.977497 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqz9l\" (UniqueName: \"kubernetes.io/projected/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-kube-api-access-wqz9l\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.015998 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9eadf119-7f74-4248-bd3f-1eabc2cdbf92" (UID: "9eadf119-7f74-4248-bd3f-1eabc2cdbf92"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.016321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-config" (OuterVolumeSpecName: "config") pod "9eadf119-7f74-4248-bd3f-1eabc2cdbf92" (UID: "9eadf119-7f74-4248-bd3f-1eabc2cdbf92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.022650 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9eadf119-7f74-4248-bd3f-1eabc2cdbf92" (UID: "9eadf119-7f74-4248-bd3f-1eabc2cdbf92"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.028715 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9eadf119-7f74-4248-bd3f-1eabc2cdbf92" (UID: "9eadf119-7f74-4248-bd3f-1eabc2cdbf92"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.045085 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9eadf119-7f74-4248-bd3f-1eabc2cdbf92" (UID: "9eadf119-7f74-4248-bd3f-1eabc2cdbf92"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.086030 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.086090 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.086110 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.086123 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.086141 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9eadf119-7f74-4248-bd3f-1eabc2cdbf92-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.133044 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.197671 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.264538 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66bc647888-tcn4m" podUID="00c35a0d-70b4-453d-974a-85b638505280" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.264631 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.265601 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7b757dbbd271bf889135a4e68662a58696ae5a3097ea2a472cbfee50d00094ed"} pod="openstack/horizon-66bc647888-tcn4m" containerMessage="Container horizon failed startup probe, will be restarted" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.265643 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66bc647888-tcn4m" podUID="00c35a0d-70b4-453d-974a-85b638505280" containerName="horizon" containerID="cri-o://7b757dbbd271bf889135a4e68662a58696ae5a3097ea2a472cbfee50d00094ed" gracePeriod=30 Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.266504 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6"] Dec 03 11:15:10 crc kubenswrapper[4756]: W1203 11:15:10.285855 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a4a245b_e6cc_418e_9fbc_6270c50fb523.slice/crio-4a01c8007b3a5a631907614b374b63d85327ba6be21c9f117f3c9eb5cdc977d8 WatchSource:0}: Error finding container 4a01c8007b3a5a631907614b374b63d85327ba6be21c9f117f3c9eb5cdc977d8: Status 404 returned error can't find the container with id 4a01c8007b3a5a631907614b374b63d85327ba6be21c9f117f3c9eb5cdc977d8 Dec 03 11:15:10 crc kubenswrapper[4756]: E1203 11:15:10.327804 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 03 11:15:10 crc kubenswrapper[4756]: E1203 11:15:10.328083 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-ap
i-access-crb5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(450e58a8-d0fc-4a72-9ad1-e7a7b7394d04): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 11:15:10 crc kubenswrapper[4756]: E1203 11:15:10.329334 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.402217 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-6kdvr" Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.403043 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" event={"ID":"0a4a245b-e6cc-418e-9fbc-6270c50fb523","Type":"ContainerStarted","Data":"4a01c8007b3a5a631907614b374b63d85327ba6be21c9f117f3c9eb5cdc977d8"} Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.403608 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="ceilometer-notification-agent" containerID="cri-o://c214dc8518b11d04534ee969b0aba8a1acc2eda4665e7c78b51cb50102b8e306" gracePeriod=30 Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.403797 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="sg-core" containerID="cri-o://6a7314e6b3df22abd0621938a12d382ad93d68ac1af169f24f844be844f856a6" gracePeriod=30 Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.422897 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.534333 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6kdvr"] Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.545992 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-6kdvr"] Dec 03 11:15:10 crc kubenswrapper[4756]: I1203 11:15:10.984075 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nc9r6"] Dec 03 
11:15:11 crc kubenswrapper[4756]: W1203 11:15:11.013291 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3582e52f_4806_401b_a822_6a98777de800.slice/crio-25c6dfb4854023460618ae07bbd436b233aa44436b8c16eb79332b83417bbf5d WatchSource:0}: Error finding container 25c6dfb4854023460618ae07bbd436b233aa44436b8c16eb79332b83417bbf5d: Status 404 returned error can't find the container with id 25c6dfb4854023460618ae07bbd436b233aa44436b8c16eb79332b83417bbf5d Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.023405 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.278155 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" path="/var/lib/kubelet/pods/9eadf119-7f74-4248-bd3f-1eabc2cdbf92/volumes" Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.435669 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" event={"ID":"ffa571bc-b5f1-4b8f-be29-9eae5f21db25","Type":"ContainerStarted","Data":"acc0ee5ae22443a37d963cfc9280d5b1c7ea7f0e0cf4e1fa3851c14f45405334"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.438624 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3b76a2e-0804-4167-884f-ea552dd0ef7a","Type":"ContainerStarted","Data":"a69838fe55753c013d958ce4e92b951be9660fb39f86bb81467ec8a0e7b823ac"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.451427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-774fbcb69f-lqf5k" event={"ID":"2fb0446e-9a35-4390-9290-b539e6a8718e","Type":"ContainerStarted","Data":"b1fe8804fe6620b9e1fed90bfdd2c9788706b347523249d52d28c704de1e89d6"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.459686 4756 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/barbican-keystone-listener-6796bdbbcb-s99wh" podStartSLOduration=16.949916858 podStartE2EDuration="22.459661867s" podCreationTimestamp="2025-12-03 11:14:49 +0000 UTC" firstStartedPulling="2025-12-03 11:14:51.241193053 +0000 UTC m=+1302.271194297" lastFinishedPulling="2025-12-03 11:14:56.750938062 +0000 UTC m=+1307.780939306" observedRunningTime="2025-12-03 11:15:11.457057345 +0000 UTC m=+1322.487058589" watchObservedRunningTime="2025-12-03 11:15:11.459661867 +0000 UTC m=+1322.489663111" Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.490016 4756 generic.go:334] "Generic (PLEG): container finished" podID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerID="6a7314e6b3df22abd0621938a12d382ad93d68ac1af169f24f844be844f856a6" exitCode=2 Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.490624 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04","Type":"ContainerDied","Data":"6a7314e6b3df22abd0621938a12d382ad93d68ac1af169f24f844be844f856a6"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.505856 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" event={"ID":"3582e52f-4806-401b-a822-6a98777de800","Type":"ContainerStarted","Data":"3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.506079 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" event={"ID":"3582e52f-4806-401b-a822-6a98777de800","Type":"ContainerStarted","Data":"25c6dfb4854023460618ae07bbd436b233aa44436b8c16eb79332b83417bbf5d"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.538024 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-745d87f76f-mbrkc" Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.554143 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="0a4a245b-e6cc-418e-9fbc-6270c50fb523" containerID="1e4521f4a388aff990ad75d51e2328e9aae105ed9f3f21397f9e99c62c1b741e" exitCode=0 Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.554343 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" event={"ID":"0a4a245b-e6cc-418e-9fbc-6270c50fb523","Type":"ContainerDied","Data":"1e4521f4a388aff990ad75d51e2328e9aae105ed9f3f21397f9e99c62c1b741e"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.563554 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-774fbcb69f-lqf5k" podStartSLOduration=16.941585386 podStartE2EDuration="22.563523677s" podCreationTimestamp="2025-12-03 11:14:49 +0000 UTC" firstStartedPulling="2025-12-03 11:14:51.116178652 +0000 UTC m=+1302.146179896" lastFinishedPulling="2025-12-03 11:14:56.738116943 +0000 UTC m=+1307.768118187" observedRunningTime="2025-12-03 11:15:11.494262173 +0000 UTC m=+1322.524263437" watchObservedRunningTime="2025-12-03 11:15:11.563523677 +0000 UTC m=+1322.593524921" Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.571913 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a2d6381a-8b32-4174-93e9-e573dead9f60","Type":"ContainerStarted","Data":"5977a7631febb908003d96afc4098dc1dc8bbbe0d4072cf5d888b395028bb02e"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.607416 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75c47c8598-6kchw" event={"ID":"09b51d51-7a9c-4b73-a277-c488661e4af0","Type":"ContainerStarted","Data":"dc4009c8fe6d143ceb5baa52b805a6884c33107dd9622581e269f13f8b7351f3"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.607502 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75c47c8598-6kchw" 
event={"ID":"09b51d51-7a9c-4b73-a277-c488661e4af0","Type":"ContainerStarted","Data":"b4d6e7078390cfa270501a52b3e21c0ea452452e3c26565251c39def21882ebc"} Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.608011 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.609240 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.815085 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f7c586c7d-n86cw"] Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.815597 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f7c586c7d-n86cw" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-api" containerID="cri-o://475abb4eb0d1a6febb4c5ad6c0005153650ab5be29d6fe73473ba22868e9fd4d" gracePeriod=30 Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.815841 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f7c586c7d-n86cw" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-httpd" containerID="cri-o://1175dbb598e852115e35598807fd73e576ffcd00994e82e04614502f48c80feb" gracePeriod=30 Dec 03 11:15:11 crc kubenswrapper[4756]: I1203 11:15:11.819183 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75c47c8598-6kchw" podStartSLOduration=18.819152882 podStartE2EDuration="18.819152882s" podCreationTimestamp="2025-12-03 11:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:11.692351346 +0000 UTC m=+1322.722352600" watchObservedRunningTime="2025-12-03 11:15:11.819152882 +0000 UTC m=+1322.849154126" Dec 03 11:15:12 crc kubenswrapper[4756]: I1203 
11:15:12.272734 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:12 crc kubenswrapper[4756]: I1203 11:15:12.623235 4756 generic.go:334] "Generic (PLEG): container finished" podID="3582e52f-4806-401b-a822-6a98777de800" containerID="3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26" exitCode=0 Dec 03 11:15:12 crc kubenswrapper[4756]: I1203 11:15:12.625078 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" event={"ID":"3582e52f-4806-401b-a822-6a98777de800","Type":"ContainerDied","Data":"3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26"} Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.081397 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.250059 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a4a245b-e6cc-418e-9fbc-6270c50fb523-config-volume\") pod \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.250581 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a4a245b-e6cc-418e-9fbc-6270c50fb523-secret-volume\") pod \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.250750 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-645zk\" (UniqueName: \"kubernetes.io/projected/0a4a245b-e6cc-418e-9fbc-6270c50fb523-kube-api-access-645zk\") pod \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\" (UID: \"0a4a245b-e6cc-418e-9fbc-6270c50fb523\") " Dec 03 11:15:13 crc kubenswrapper[4756]: 
I1203 11:15:13.251142 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4a245b-e6cc-418e-9fbc-6270c50fb523-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a4a245b-e6cc-418e-9fbc-6270c50fb523" (UID: "0a4a245b-e6cc-418e-9fbc-6270c50fb523"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.252284 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a4a245b-e6cc-418e-9fbc-6270c50fb523-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.258548 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4a245b-e6cc-418e-9fbc-6270c50fb523-kube-api-access-645zk" (OuterVolumeSpecName: "kube-api-access-645zk") pod "0a4a245b-e6cc-418e-9fbc-6270c50fb523" (UID: "0a4a245b-e6cc-418e-9fbc-6270c50fb523"). InnerVolumeSpecName "kube-api-access-645zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.259641 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4a245b-e6cc-418e-9fbc-6270c50fb523-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a4a245b-e6cc-418e-9fbc-6270c50fb523" (UID: "0a4a245b-e6cc-418e-9fbc-6270c50fb523"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.316637 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.354254 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-645zk\" (UniqueName: \"kubernetes.io/projected/0a4a245b-e6cc-418e-9fbc-6270c50fb523-kube-api-access-645zk\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.354298 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a4a245b-e6cc-418e-9fbc-6270c50fb523-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.676849 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3b76a2e-0804-4167-884f-ea552dd0ef7a","Type":"ContainerStarted","Data":"3cbcdf001c88602447a5eb8f23cd03a3027990538a752865144a084872773fc1"} Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.681734 4756 generic.go:334] "Generic (PLEG): container finished" podID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerID="c214dc8518b11d04534ee969b0aba8a1acc2eda4665e7c78b51cb50102b8e306" exitCode=0 Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.681795 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04","Type":"ContainerDied","Data":"c214dc8518b11d04534ee969b0aba8a1acc2eda4665e7c78b51cb50102b8e306"} Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.686716 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" event={"ID":"3582e52f-4806-401b-a822-6a98777de800","Type":"ContainerStarted","Data":"8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d"} Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 
11:15:13.689272 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.692785 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" event={"ID":"0a4a245b-e6cc-418e-9fbc-6270c50fb523","Type":"ContainerDied","Data":"4a01c8007b3a5a631907614b374b63d85327ba6be21c9f117f3c9eb5cdc977d8"} Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.692808 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.692826 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a01c8007b3a5a631907614b374b63d85327ba6be21c9f117f3c9eb5cdc977d8" Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.696509 4756 generic.go:334] "Generic (PLEG): container finished" podID="2990fd3a-f317-4628-a783-6238277ff18f" containerID="1175dbb598e852115e35598807fd73e576ffcd00994e82e04614502f48c80feb" exitCode=0 Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.696567 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f7c586c7d-n86cw" event={"ID":"2990fd3a-f317-4628-a783-6238277ff18f","Type":"ContainerDied","Data":"1175dbb598e852115e35598807fd73e576ffcd00994e82e04614502f48c80feb"} Dec 03 11:15:13 crc kubenswrapper[4756]: I1203 11:15:13.698821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a2d6381a-8b32-4174-93e9-e573dead9f60","Type":"ContainerStarted","Data":"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5"} Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.243889 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.253054 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" podStartSLOduration=5.2530287940000004 podStartE2EDuration="5.253028794s" podCreationTimestamp="2025-12-03 11:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:13.721722442 +0000 UTC m=+1324.751723686" watchObservedRunningTime="2025-12-03 11:15:14.253028794 +0000 UTC m=+1325.283030038" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.346759 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-log-httpd\") pod \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.347323 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-run-httpd\") pod \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.347874 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-sg-core-conf-yaml\") pod \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.348067 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crb5r\" (UniqueName: \"kubernetes.io/projected/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-kube-api-access-crb5r\") pod \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\" (UID: 
\"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.348195 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-scripts\") pod \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.349157 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-config-data\") pod \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.347390 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" (UID: "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.347752 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" (UID: "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.349326 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-combined-ca-bundle\") pod \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\" (UID: \"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04\") " Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.350364 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.350568 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.362898 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-kube-api-access-crb5r" (OuterVolumeSpecName: "kube-api-access-crb5r") pod "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" (UID: "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04"). InnerVolumeSpecName "kube-api-access-crb5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.370186 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-scripts" (OuterVolumeSpecName: "scripts") pod "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" (UID: "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.394389 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" (UID: "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.422204 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-config-data" (OuterVolumeSpecName: "config-data") pod "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" (UID: "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.445979 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" (UID: "450e58a8-d0fc-4a72-9ad1-e7a7b7394d04"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.452827 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.452860 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crb5r\" (UniqueName: \"kubernetes.io/projected/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-kube-api-access-crb5r\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.452881 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.452890 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.452900 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.719759 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a2d6381a-8b32-4174-93e9-e573dead9f60","Type":"ContainerStarted","Data":"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e"} Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.720341 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.719903 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api-log" containerID="cri-o://b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5" gracePeriod=30 Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.720422 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api" containerID="cri-o://39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e" gracePeriod=30 Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.733329 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3b76a2e-0804-4167-884f-ea552dd0ef7a","Type":"ContainerStarted","Data":"c3ebdca78537f2f81cc0ce3ef42057c5de02346e27e341dae62a75b6dacbd248"} Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.757324 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.756937454 podStartE2EDuration="5.756937454s" podCreationTimestamp="2025-12-03 11:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:14.747715716 +0000 UTC m=+1325.777716960" watchObservedRunningTime="2025-12-03 11:15:14.756937454 +0000 UTC m=+1325.786938698" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.758360 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.758512 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"450e58a8-d0fc-4a72-9ad1-e7a7b7394d04","Type":"ContainerDied","Data":"319b8dc922af5291124e96ff3e937299659e85a274f9428c91834987f245d817"} Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.758570 4756 scope.go:117] "RemoveContainer" containerID="6a7314e6b3df22abd0621938a12d382ad93d68ac1af169f24f844be844f856a6" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.788133 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.636293103 podStartE2EDuration="5.788099023s" podCreationTimestamp="2025-12-03 11:15:09 +0000 UTC" firstStartedPulling="2025-12-03 11:15:10.523379453 +0000 UTC m=+1321.553380697" lastFinishedPulling="2025-12-03 11:15:11.675185373 +0000 UTC m=+1322.705186617" observedRunningTime="2025-12-03 11:15:14.783322974 +0000 UTC m=+1325.813324218" watchObservedRunningTime="2025-12-03 11:15:14.788099023 +0000 UTC m=+1325.818100267" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.809266 4756 scope.go:117] "RemoveContainer" containerID="c214dc8518b11d04534ee969b0aba8a1acc2eda4665e7c78b51cb50102b8e306" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.887152 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.918032 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.932391 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:14 crc kubenswrapper[4756]: E1203 11:15:14.933041 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="sg-core" Dec 03 11:15:14 crc 
kubenswrapper[4756]: I1203 11:15:14.933062 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="sg-core" Dec 03 11:15:14 crc kubenswrapper[4756]: E1203 11:15:14.933080 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4a245b-e6cc-418e-9fbc-6270c50fb523" containerName="collect-profiles" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.933087 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4a245b-e6cc-418e-9fbc-6270c50fb523" containerName="collect-profiles" Dec 03 11:15:14 crc kubenswrapper[4756]: E1203 11:15:14.933102 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="ceilometer-notification-agent" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.933109 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="ceilometer-notification-agent" Dec 03 11:15:14 crc kubenswrapper[4756]: E1203 11:15:14.933130 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="init" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.933136 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="init" Dec 03 11:15:14 crc kubenswrapper[4756]: E1203 11:15:14.933165 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="dnsmasq-dns" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.933185 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="dnsmasq-dns" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.933389 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eadf119-7f74-4248-bd3f-1eabc2cdbf92" containerName="dnsmasq-dns" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 
11:15:14.933426 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4a245b-e6cc-418e-9fbc-6270c50fb523" containerName="collect-profiles" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.933438 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="ceilometer-notification-agent" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.933452 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" containerName="sg-core" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.935589 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.941412 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.941607 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:15:14 crc kubenswrapper[4756]: I1203 11:15:14.942270 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.079701 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.079780 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-config-data\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc 
kubenswrapper[4756]: I1203 11:15:15.079842 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-log-httpd\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.079865 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.079894 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-run-httpd\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.080144 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-scripts\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.081230 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5lg\" (UniqueName: \"kubernetes.io/projected/18785e02-8cf4-49ce-97bf-90543dea446c-kube-api-access-sg5lg\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.184077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-log-httpd\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.184568 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.184604 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-run-httpd\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.184723 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-scripts\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.184869 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg5lg\" (UniqueName: \"kubernetes.io/projected/18785e02-8cf4-49ce-97bf-90543dea446c-kube-api-access-sg5lg\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.184909 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc 
kubenswrapper[4756]: I1203 11:15:15.184942 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-config-data\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.186144 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-log-httpd\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.186549 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-run-httpd\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.191785 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.193096 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-config-data\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.207669 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-scripts\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") 
" pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.208420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg5lg\" (UniqueName: \"kubernetes.io/projected/18785e02-8cf4-49ce-97bf-90543dea446c-kube-api-access-sg5lg\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.228200 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.333678 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.366410 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450e58a8-d0fc-4a72-9ad1-e7a7b7394d04" path="/var/lib/kubelet/pods/450e58a8-d0fc-4a72-9ad1-e7a7b7394d04/volumes" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.509867 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.547765 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d6381a-8b32-4174-93e9-e573dead9f60-logs\") pod \"a2d6381a-8b32-4174-93e9-e573dead9f60\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.547845 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5dwp\" (UniqueName: \"kubernetes.io/projected/a2d6381a-8b32-4174-93e9-e573dead9f60-kube-api-access-d5dwp\") pod \"a2d6381a-8b32-4174-93e9-e573dead9f60\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.547871 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-scripts\") pod \"a2d6381a-8b32-4174-93e9-e573dead9f60\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.547910 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-combined-ca-bundle\") pod \"a2d6381a-8b32-4174-93e9-e573dead9f60\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.548092 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data\") pod \"a2d6381a-8b32-4174-93e9-e573dead9f60\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.548120 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data-custom\") pod \"a2d6381a-8b32-4174-93e9-e573dead9f60\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.548226 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2d6381a-8b32-4174-93e9-e573dead9f60-etc-machine-id\") pod \"a2d6381a-8b32-4174-93e9-e573dead9f60\" (UID: \"a2d6381a-8b32-4174-93e9-e573dead9f60\") " Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.548571 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6381a-8b32-4174-93e9-e573dead9f60-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a2d6381a-8b32-4174-93e9-e573dead9f60" (UID: "a2d6381a-8b32-4174-93e9-e573dead9f60"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.549273 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2d6381a-8b32-4174-93e9-e573dead9f60-logs" (OuterVolumeSpecName: "logs") pod "a2d6381a-8b32-4174-93e9-e573dead9f60" (UID: "a2d6381a-8b32-4174-93e9-e573dead9f60"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.560468 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2d6381a-8b32-4174-93e9-e573dead9f60" (UID: "a2d6381a-8b32-4174-93e9-e573dead9f60"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.578364 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-scripts" (OuterVolumeSpecName: "scripts") pod "a2d6381a-8b32-4174-93e9-e573dead9f60" (UID: "a2d6381a-8b32-4174-93e9-e573dead9f60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.595269 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d6381a-8b32-4174-93e9-e573dead9f60-kube-api-access-d5dwp" (OuterVolumeSpecName: "kube-api-access-d5dwp") pod "a2d6381a-8b32-4174-93e9-e573dead9f60" (UID: "a2d6381a-8b32-4174-93e9-e573dead9f60"). InnerVolumeSpecName "kube-api-access-d5dwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.632219 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2d6381a-8b32-4174-93e9-e573dead9f60" (UID: "a2d6381a-8b32-4174-93e9-e573dead9f60"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.656282 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d6381a-8b32-4174-93e9-e573dead9f60-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.656339 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5dwp\" (UniqueName: \"kubernetes.io/projected/a2d6381a-8b32-4174-93e9-e573dead9f60-kube-api-access-d5dwp\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.656350 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.656361 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.656370 4756 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.656378 4756 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2d6381a-8b32-4174-93e9-e573dead9f60-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.808685 4756 generic.go:334] "Generic (PLEG): container finished" podID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerID="39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e" exitCode=0 Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.808736 4756 generic.go:334] "Generic 
(PLEG): container finished" podID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerID="b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5" exitCode=143 Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.808833 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a2d6381a-8b32-4174-93e9-e573dead9f60","Type":"ContainerDied","Data":"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e"} Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.808875 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a2d6381a-8b32-4174-93e9-e573dead9f60","Type":"ContainerDied","Data":"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5"} Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.808890 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a2d6381a-8b32-4174-93e9-e573dead9f60","Type":"ContainerDied","Data":"5977a7631febb908003d96afc4098dc1dc8bbbe0d4072cf5d888b395028bb02e"} Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.808909 4756 scope.go:117] "RemoveContainer" containerID="39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.810642 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data" (OuterVolumeSpecName: "config-data") pod "a2d6381a-8b32-4174-93e9-e573dead9f60" (UID: "a2d6381a-8b32-4174-93e9-e573dead9f60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.810888 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.865272 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d6381a-8b32-4174-93e9-e573dead9f60-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.963837 4756 scope.go:117] "RemoveContainer" containerID="b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5" Dec 03 11:15:15 crc kubenswrapper[4756]: I1203 11:15:15.983014 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.000930 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.013863 4756 scope.go:117] "RemoveContainer" containerID="39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e" Dec 03 11:15:16 crc kubenswrapper[4756]: E1203 11:15:16.015134 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e\": container with ID starting with 39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e not found: ID does not exist" containerID="39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.015177 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e"} err="failed to get container status \"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e\": rpc error: code = NotFound desc = could not find container \"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e\": container with ID starting with 39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e not 
found: ID does not exist" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.015206 4756 scope.go:117] "RemoveContainer" containerID="b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5" Dec 03 11:15:16 crc kubenswrapper[4756]: E1203 11:15:16.016048 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5\": container with ID starting with b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5 not found: ID does not exist" containerID="b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.016074 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5"} err="failed to get container status \"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5\": rpc error: code = NotFound desc = could not find container \"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5\": container with ID starting with b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5 not found: ID does not exist" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.016088 4756 scope.go:117] "RemoveContainer" containerID="39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.022097 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e"} err="failed to get container status \"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e\": rpc error: code = NotFound desc = could not find container \"39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e\": container with ID starting with 
39afdae74dd5a1e5a871064b303a102ab08ffa2f670e23f053cd5d81e82b220e not found: ID does not exist" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.022130 4756 scope.go:117] "RemoveContainer" containerID="b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.024022 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5"} err="failed to get container status \"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5\": rpc error: code = NotFound desc = could not find container \"b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5\": container with ID starting with b1def224a1830dd9fffe7f32d9cf3b07555c14837310c3f4f3a56d706daca3f5 not found: ID does not exist" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.025750 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:16 crc kubenswrapper[4756]: E1203 11:15:16.026278 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api-log" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.026299 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api-log" Dec 03 11:15:16 crc kubenswrapper[4756]: E1203 11:15:16.026355 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.026365 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.026533 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api-log" Dec 
03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.026554 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" containerName="cinder-api" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.030835 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.038398 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.038696 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.040486 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.057281 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071688 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071762 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-scripts\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071791 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071816 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-config-data-custom\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071854 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cada869c-6167-4fd2-b8ad-470d18f09cf4-logs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071872 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071892 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-config-data\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.071914 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvhg8\" (UniqueName: \"kubernetes.io/projected/cada869c-6167-4fd2-b8ad-470d18f09cf4-kube-api-access-mvhg8\") pod \"cinder-api-0\" (UID: 
\"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.072082 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cada869c-6167-4fd2-b8ad-470d18f09cf4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.123603 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173212 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cada869c-6167-4fd2-b8ad-470d18f09cf4-logs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173265 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173293 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-config-data\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvhg8\" (UniqueName: \"kubernetes.io/projected/cada869c-6167-4fd2-b8ad-470d18f09cf4-kube-api-access-mvhg8\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " 
pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173352 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cada869c-6167-4fd2-b8ad-470d18f09cf4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173419 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-scripts\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173487 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173508 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-config-data-custom\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.173684 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cada869c-6167-4fd2-b8ad-470d18f09cf4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.176999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cada869c-6167-4fd2-b8ad-470d18f09cf4-logs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.179192 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-scripts\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.185428 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-config-data\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.186260 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.188326 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-config-data-custom\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.188737 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.189082 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cada869c-6167-4fd2-b8ad-470d18f09cf4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.202410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvhg8\" (UniqueName: \"kubernetes.io/projected/cada869c-6167-4fd2-b8ad-470d18f09cf4-kube-api-access-mvhg8\") pod \"cinder-api-0\" (UID: \"cada869c-6167-4fd2-b8ad-470d18f09cf4\") " pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.317822 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.382260 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.891386 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerStarted","Data":"7afd4648f0dfd5e279c45cd0291356e82d30c723fc2227f2078bf47370163f0c"} Dec 03 11:15:16 crc kubenswrapper[4756]: I1203 11:15:16.986390 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 11:15:17 crc kubenswrapper[4756]: I1203 11:15:17.247442 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d6381a-8b32-4174-93e9-e573dead9f60" path="/var/lib/kubelet/pods/a2d6381a-8b32-4174-93e9-e573dead9f60/volumes" Dec 03 11:15:17 crc kubenswrapper[4756]: I1203 11:15:17.909480 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerStarted","Data":"303966d7673f233bd9800d73634f121e60df9d6f2edd473d3c4386937e3c21c9"} Dec 03 11:15:17 crc kubenswrapper[4756]: I1203 11:15:17.910002 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerStarted","Data":"f2a32bf79de6ed0e680db33eaa4dfa49db208156ce7b54014ae00ba36fd0aed1"} Dec 03 11:15:17 crc kubenswrapper[4756]: I1203 11:15:17.915975 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cada869c-6167-4fd2-b8ad-470d18f09cf4","Type":"ContainerStarted","Data":"1d3b63da78acf70169555a6d4ab83fdaf1cf8109a049b220debd35a7647e67be"} Dec 03 11:15:17 crc kubenswrapper[4756]: I1203 11:15:17.916032 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cada869c-6167-4fd2-b8ad-470d18f09cf4","Type":"ContainerStarted","Data":"7aa2bc354f25db194f9cfa552973315aa2c2d6e02121aee88426848869122f9a"} Dec 03 11:15:17 crc kubenswrapper[4756]: I1203 
11:15:17.919114 4756 generic.go:334] "Generic (PLEG): container finished" podID="2990fd3a-f317-4628-a783-6238277ff18f" containerID="475abb4eb0d1a6febb4c5ad6c0005153650ab5be29d6fe73473ba22868e9fd4d" exitCode=0 Dec 03 11:15:17 crc kubenswrapper[4756]: I1203 11:15:17.919165 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f7c586c7d-n86cw" event={"ID":"2990fd3a-f317-4628-a783-6238277ff18f","Type":"ContainerDied","Data":"475abb4eb0d1a6febb4c5ad6c0005153650ab5be29d6fe73473ba22868e9fd4d"} Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.310791 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.430548 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-ovndb-tls-certs\") pod \"2990fd3a-f317-4628-a783-6238277ff18f\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.430740 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-config\") pod \"2990fd3a-f317-4628-a783-6238277ff18f\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.430781 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg6nc\" (UniqueName: \"kubernetes.io/projected/2990fd3a-f317-4628-a783-6238277ff18f-kube-api-access-lg6nc\") pod \"2990fd3a-f317-4628-a783-6238277ff18f\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.430807 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-combined-ca-bundle\") pod \"2990fd3a-f317-4628-a783-6238277ff18f\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.430860 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-httpd-config\") pod \"2990fd3a-f317-4628-a783-6238277ff18f\" (UID: \"2990fd3a-f317-4628-a783-6238277ff18f\") " Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.438195 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2990fd3a-f317-4628-a783-6238277ff18f" (UID: "2990fd3a-f317-4628-a783-6238277ff18f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.445510 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2990fd3a-f317-4628-a783-6238277ff18f-kube-api-access-lg6nc" (OuterVolumeSpecName: "kube-api-access-lg6nc") pod "2990fd3a-f317-4628-a783-6238277ff18f" (UID: "2990fd3a-f317-4628-a783-6238277ff18f"). InnerVolumeSpecName "kube-api-access-lg6nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.524353 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2990fd3a-f317-4628-a783-6238277ff18f" (UID: "2990fd3a-f317-4628-a783-6238277ff18f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.534170 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg6nc\" (UniqueName: \"kubernetes.io/projected/2990fd3a-f317-4628-a783-6238277ff18f-kube-api-access-lg6nc\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.534236 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.534251 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.559928 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-config" (OuterVolumeSpecName: "config") pod "2990fd3a-f317-4628-a783-6238277ff18f" (UID: "2990fd3a-f317-4628-a783-6238277ff18f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.567704 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2990fd3a-f317-4628-a783-6238277ff18f" (UID: "2990fd3a-f317-4628-a783-6238277ff18f"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.636037 4756 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.636067 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2990fd3a-f317-4628-a783-6238277ff18f-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.935149 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerStarted","Data":"737eef6f6b2d8c3feec2f27b7ab5bfeaf0b88323d239f00717facba2b2b068d4"} Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.938834 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cada869c-6167-4fd2-b8ad-470d18f09cf4","Type":"ContainerStarted","Data":"d2e862161ae39527d54bab4725544481754414546a181830577b48a669e7f5d5"} Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.941391 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.945581 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f7c586c7d-n86cw" event={"ID":"2990fd3a-f317-4628-a783-6238277ff18f","Type":"ContainerDied","Data":"9e0abfc8042f51565ab8ee44882361f5cb3729cdf5bf22ead02df47e49b117bf"} Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.945635 4756 scope.go:117] "RemoveContainer" containerID="1175dbb598e852115e35598807fd73e576ffcd00994e82e04614502f48c80feb" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.945779 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f7c586c7d-n86cw" Dec 03 11:15:18 crc kubenswrapper[4756]: I1203 11:15:18.982163 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.982131104 podStartE2EDuration="3.982131104s" podCreationTimestamp="2025-12-03 11:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:18.968492139 +0000 UTC m=+1329.998493393" watchObservedRunningTime="2025-12-03 11:15:18.982131104 +0000 UTC m=+1330.012132338" Dec 03 11:15:19 crc kubenswrapper[4756]: I1203 11:15:19.010869 4756 scope.go:117] "RemoveContainer" containerID="475abb4eb0d1a6febb4c5ad6c0005153650ab5be29d6fe73473ba22868e9fd4d" Dec 03 11:15:19 crc kubenswrapper[4756]: I1203 11:15:19.015793 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f7c586c7d-n86cw"] Dec 03 11:15:19 crc kubenswrapper[4756]: I1203 11:15:19.055807 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7f7c586c7d-n86cw"] Dec 03 11:15:19 crc kubenswrapper[4756]: I1203 11:15:19.253182 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2990fd3a-f317-4628-a783-6238277ff18f" path="/var/lib/kubelet/pods/2990fd3a-f317-4628-a783-6238277ff18f/volumes" Dec 03 11:15:19 crc kubenswrapper[4756]: I1203 11:15:19.415606 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 11:15:19 crc kubenswrapper[4756]: I1203 11:15:19.712749 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 11:15:20 crc kubenswrapper[4756]: I1203 11:15:20.019553 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:20 crc kubenswrapper[4756]: I1203 11:15:20.135116 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:15:20 crc kubenswrapper[4756]: I1203 11:15:20.240140 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cmmqm"] Dec 03 11:15:20 crc kubenswrapper[4756]: I1203 11:15:20.240434 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerName="dnsmasq-dns" containerID="cri-o://3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1" gracePeriod=10 Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.003560 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.003817 4756 generic.go:334] "Generic (PLEG): container finished" podID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerID="3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1" exitCode=0 Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.003857 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" event={"ID":"d17d9ad0-25f3-4e48-916a-814feb88ee3a","Type":"ContainerDied","Data":"3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1"} Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.004704 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" event={"ID":"d17d9ad0-25f3-4e48-916a-814feb88ee3a","Type":"ContainerDied","Data":"d47f2a07775690bd9393e70b9174554d378890c435ed6ac3b2dec8cc09b6989b"} Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.004784 4756 scope.go:117] "RemoveContainer" containerID="3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.005143 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="cinder-scheduler" containerID="cri-o://3cbcdf001c88602447a5eb8f23cd03a3027990538a752865144a084872773fc1" gracePeriod=30 Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.005290 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="probe" containerID="cri-o://c3ebdca78537f2f81cc0ce3ef42057c5de02346e27e341dae62a75b6dacbd248" gracePeriod=30 Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.039400 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-nb\") pod \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.094214 4756 scope.go:117] "RemoveContainer" containerID="15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.128037 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d17d9ad0-25f3-4e48-916a-814feb88ee3a" (UID: "d17d9ad0-25f3-4e48-916a-814feb88ee3a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.136253 4756 scope.go:117] "RemoveContainer" containerID="3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1" Dec 03 11:15:21 crc kubenswrapper[4756]: E1203 11:15:21.137036 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1\": container with ID starting with 3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1 not found: ID does not exist" containerID="3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.137089 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1"} err="failed to get container status \"3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1\": rpc error: code = NotFound desc = could not find container \"3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1\": container with ID starting with 3cf389bd7067c8db40032784d07a94706fd2c55384528a5414be9ba5bfcc15f1 not found: ID does not exist" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.137124 4756 scope.go:117] "RemoveContainer" containerID="15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84" Dec 03 11:15:21 crc kubenswrapper[4756]: E1203 11:15:21.137357 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84\": container with ID starting with 15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84 not found: ID does not exist" containerID="15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.137389 
4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84"} err="failed to get container status \"15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84\": rpc error: code = NotFound desc = could not find container \"15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84\": container with ID starting with 15a435a2e29ca696f66d287ca4ad932904754a732a572e462d2fc642ae5f0e84 not found: ID does not exist" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.142700 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfqp\" (UniqueName: \"kubernetes.io/projected/d17d9ad0-25f3-4e48-916a-814feb88ee3a-kube-api-access-hbfqp\") pod \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.142826 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-sb\") pod \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.142903 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-swift-storage-0\") pod \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.142969 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-config\") pod \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " Dec 03 11:15:21 crc kubenswrapper[4756]: 
I1203 11:15:21.143047 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-svc\") pod \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\" (UID: \"d17d9ad0-25f3-4e48-916a-814feb88ee3a\") " Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.143559 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.179258 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17d9ad0-25f3-4e48-916a-814feb88ee3a-kube-api-access-hbfqp" (OuterVolumeSpecName: "kube-api-access-hbfqp") pod "d17d9ad0-25f3-4e48-916a-814feb88ee3a" (UID: "d17d9ad0-25f3-4e48-916a-814feb88ee3a"). InnerVolumeSpecName "kube-api-access-hbfqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.230831 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d17d9ad0-25f3-4e48-916a-814feb88ee3a" (UID: "d17d9ad0-25f3-4e48-916a-814feb88ee3a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.246451 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfqp\" (UniqueName: \"kubernetes.io/projected/d17d9ad0-25f3-4e48-916a-814feb88ee3a-kube-api-access-hbfqp\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.246488 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.248905 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d17d9ad0-25f3-4e48-916a-814feb88ee3a" (UID: "d17d9ad0-25f3-4e48-916a-814feb88ee3a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.264697 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d17d9ad0-25f3-4e48-916a-814feb88ee3a" (UID: "d17d9ad0-25f3-4e48-916a-814feb88ee3a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.337135 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-config" (OuterVolumeSpecName: "config") pod "d17d9ad0-25f3-4e48-916a-814feb88ee3a" (UID: "d17d9ad0-25f3-4e48-916a-814feb88ee3a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.352358 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.352401 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:21 crc kubenswrapper[4756]: I1203 11:15:21.352420 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17d9ad0-25f3-4e48-916a-814feb88ee3a-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.018912 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerStarted","Data":"a3ced57096b7d1e13bb21d986d2141a0e6f2a01c890edfebf7ddab79ff29dae4"} Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.020391 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.021564 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.080217 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.752471632 podStartE2EDuration="8.080184142s" podCreationTimestamp="2025-12-03 11:15:14 +0000 UTC" firstStartedPulling="2025-12-03 11:15:16.132100723 +0000 UTC m=+1327.162101967" lastFinishedPulling="2025-12-03 11:15:20.459813233 +0000 UTC m=+1331.489814477" observedRunningTime="2025-12-03 11:15:22.058098985 +0000 UTC m=+1333.088100249" watchObservedRunningTime="2025-12-03 11:15:22.080184142 +0000 UTC m=+1333.110185396" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.090935 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cmmqm"] Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.108527 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cmmqm"] Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.120986 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.543463 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.612236 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.612323 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.625343 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75c47c8598-6kchw" Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.729541 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f87d4bff8-sdtnv"] Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.731667 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f87d4bff8-sdtnv" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api-log" containerID="cri-o://21ada5d2ba2f401d2950eb31b7582fac5cec5063620dea08c151f247dca3b386" gracePeriod=30 Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.731787 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f87d4bff8-sdtnv" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api" containerID="cri-o://52cf40f8eac376ac4390e7a9105c7685055dfea46f395492db5f8f41472d0643" gracePeriod=30 Dec 03 11:15:22 crc kubenswrapper[4756]: I1203 11:15:22.993716 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d57c9944-4rhnd" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.201215 4756 generic.go:334] "Generic (PLEG): container finished" podID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerID="21ada5d2ba2f401d2950eb31b7582fac5cec5063620dea08c151f247dca3b386" exitCode=143 Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.201337 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f87d4bff8-sdtnv" event={"ID":"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d","Type":"ContainerDied","Data":"21ada5d2ba2f401d2950eb31b7582fac5cec5063620dea08c151f247dca3b386"} Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 
11:15:23.306846 4756 generic.go:334] "Generic (PLEG): container finished" podID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerID="c3ebdca78537f2f81cc0ce3ef42057c5de02346e27e341dae62a75b6dacbd248" exitCode=0 Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.306909 4756 generic.go:334] "Generic (PLEG): container finished" podID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerID="3cbcdf001c88602447a5eb8f23cd03a3027990538a752865144a084872773fc1" exitCode=0 Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.310967 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" path="/var/lib/kubelet/pods/d17d9ad0-25f3-4e48-916a-814feb88ee3a/volumes" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.324226 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3b76a2e-0804-4167-884f-ea552dd0ef7a","Type":"ContainerDied","Data":"c3ebdca78537f2f81cc0ce3ef42057c5de02346e27e341dae62a75b6dacbd248"} Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.324311 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3b76a2e-0804-4167-884f-ea552dd0ef7a","Type":"ContainerDied","Data":"3cbcdf001c88602447a5eb8f23cd03a3027990538a752865144a084872773fc1"} Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.408269 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.440662 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data-custom\") pod \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.440717 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-combined-ca-bundle\") pod \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.440882 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcljm\" (UniqueName: \"kubernetes.io/projected/e3b76a2e-0804-4167-884f-ea552dd0ef7a-kube-api-access-bcljm\") pod \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.440940 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3b76a2e-0804-4167-884f-ea552dd0ef7a-etc-machine-id\") pod \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.441015 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-scripts\") pod \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.441065 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data\") pod \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\" (UID: \"e3b76a2e-0804-4167-884f-ea552dd0ef7a\") " Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.445645 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b76a2e-0804-4167-884f-ea552dd0ef7a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e3b76a2e-0804-4167-884f-ea552dd0ef7a" (UID: "e3b76a2e-0804-4167-884f-ea552dd0ef7a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.464428 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3b76a2e-0804-4167-884f-ea552dd0ef7a" (UID: "e3b76a2e-0804-4167-884f-ea552dd0ef7a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.483246 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-scripts" (OuterVolumeSpecName: "scripts") pod "e3b76a2e-0804-4167-884f-ea552dd0ef7a" (UID: "e3b76a2e-0804-4167-884f-ea552dd0ef7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.502807 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b76a2e-0804-4167-884f-ea552dd0ef7a-kube-api-access-bcljm" (OuterVolumeSpecName: "kube-api-access-bcljm") pod "e3b76a2e-0804-4167-884f-ea552dd0ef7a" (UID: "e3b76a2e-0804-4167-884f-ea552dd0ef7a"). InnerVolumeSpecName "kube-api-access-bcljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.545420 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcljm\" (UniqueName: \"kubernetes.io/projected/e3b76a2e-0804-4167-884f-ea552dd0ef7a-kube-api-access-bcljm\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.545466 4756 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3b76a2e-0804-4167-884f-ea552dd0ef7a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.545476 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.545486 4756 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.639766 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3b76a2e-0804-4167-884f-ea552dd0ef7a" (UID: "e3b76a2e-0804-4167-884f-ea552dd0ef7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.647209 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.687759 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data" (OuterVolumeSpecName: "config-data") pod "e3b76a2e-0804-4167-884f-ea552dd0ef7a" (UID: "e3b76a2e-0804-4167-884f-ea552dd0ef7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:23 crc kubenswrapper[4756]: I1203 11:15:23.749821 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b76a2e-0804-4167-884f-ea552dd0ef7a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.318805 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e3b76a2e-0804-4167-884f-ea552dd0ef7a","Type":"ContainerDied","Data":"a69838fe55753c013d958ce4e92b951be9660fb39f86bb81467ec8a0e7b823ac"} Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.318893 4756 scope.go:117] "RemoveContainer" containerID="c3ebdca78537f2f81cc0ce3ef42057c5de02346e27e341dae62a75b6dacbd248" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.318889 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.356618 4756 scope.go:117] "RemoveContainer" containerID="3cbcdf001c88602447a5eb8f23cd03a3027990538a752865144a084872773fc1" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.377328 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.398867 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.412033 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:24 crc kubenswrapper[4756]: E1203 11:15:24.412589 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-httpd" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.412626 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-httpd" Dec 03 11:15:24 crc kubenswrapper[4756]: E1203 11:15:24.412635 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-api" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.412641 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-api" Dec 03 11:15:24 crc kubenswrapper[4756]: E1203 11:15:24.412657 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="probe" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.412664 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="probe" Dec 03 11:15:24 crc kubenswrapper[4756]: E1203 11:15:24.412701 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerName="init" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.412707 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerName="init" Dec 03 11:15:24 crc kubenswrapper[4756]: E1203 11:15:24.412719 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerName="dnsmasq-dns" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.412725 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerName="dnsmasq-dns" Dec 03 11:15:24 crc kubenswrapper[4756]: E1203 11:15:24.412739 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="cinder-scheduler" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.412745 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="cinder-scheduler" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.413003 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerName="dnsmasq-dns" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.413049 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-httpd" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.413068 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="cinder-scheduler" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.413081 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2990fd3a-f317-4628-a783-6238277ff18f" containerName="neutron-api" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.413092 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" containerName="probe" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.414292 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.419210 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.432843 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.467674 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.467769 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.467806 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.467835 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.467915 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.468079 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psmk\" (UniqueName: \"kubernetes.io/projected/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-kube-api-access-6psmk\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.568714 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c6566fd84-f6lrg" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.570222 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.570311 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.570392 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.570477 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.570558 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6psmk\" (UniqueName: \"kubernetes.io/projected/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-kube-api-access-6psmk\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.570744 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.570550 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.576608 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " 
pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.576856 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.576994 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-config-data\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.593595 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-scripts\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.595874 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6psmk\" (UniqueName: \"kubernetes.io/projected/ea68c0aa-0cf1-4437-ad1a-988b44fe4032-kube-api-access-6psmk\") pod \"cinder-scheduler-0\" (UID: \"ea68c0aa-0cf1-4437-ad1a-988b44fe4032\") " pod="openstack/cinder-scheduler-0" Dec 03 11:15:24 crc kubenswrapper[4756]: I1203 11:15:24.752435 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 11:15:25 crc kubenswrapper[4756]: I1203 11:15:25.249217 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b76a2e-0804-4167-884f-ea552dd0ef7a" path="/var/lib/kubelet/pods/e3b76a2e-0804-4167-884f-ea552dd0ef7a/volumes" Dec 03 11:15:25 crc kubenswrapper[4756]: I1203 11:15:25.304064 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 11:15:25 crc kubenswrapper[4756]: W1203 11:15:25.304768 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea68c0aa_0cf1_4437_ad1a_988b44fe4032.slice/crio-1f48505d5459f58449a1bcf7ceec6793072da2848a5423ae402187d0955fdd78 WatchSource:0}: Error finding container 1f48505d5459f58449a1bcf7ceec6793072da2848a5423ae402187d0955fdd78: Status 404 returned error can't find the container with id 1f48505d5459f58449a1bcf7ceec6793072da2848a5423ae402187d0955fdd78 Dec 03 11:15:25 crc kubenswrapper[4756]: I1203 11:15:25.343057 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea68c0aa-0cf1-4437-ad1a-988b44fe4032","Type":"ContainerStarted","Data":"1f48505d5459f58449a1bcf7ceec6793072da2848a5423ae402187d0955fdd78"} Dec 03 11:15:25 crc kubenswrapper[4756]: I1203 11:15:25.888089 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-cmmqm" podUID="d17d9ad0-25f3-4e48-916a-814feb88ee3a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: i/o timeout" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.048919 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f87d4bff8-sdtnv" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:54076->10.217.0.160:9311: read: 
connection reset by peer" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.049027 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f87d4bff8-sdtnv" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:54072->10.217.0.160:9311: read: connection reset by peer" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.412322 4756 generic.go:334] "Generic (PLEG): container finished" podID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerID="52cf40f8eac376ac4390e7a9105c7685055dfea46f395492db5f8f41472d0643" exitCode=0 Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.412796 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f87d4bff8-sdtnv" event={"ID":"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d","Type":"ContainerDied","Data":"52cf40f8eac376ac4390e7a9105c7685055dfea46f395492db5f8f41472d0643"} Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.452208 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ea68c0aa-0cf1-4437-ad1a-988b44fe4032","Type":"ContainerStarted","Data":"a0c07adebe871ccb390726c76b79ab09a1e47f5d2c86199f6e274c03d351926a"} Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.704548 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.904990 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data-custom\") pod \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.905623 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-logs\") pod \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.905670 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-combined-ca-bundle\") pod \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.905746 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5hqx\" (UniqueName: \"kubernetes.io/projected/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-kube-api-access-d5hqx\") pod \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.905992 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data\") pod \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\" (UID: \"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d\") " Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.906793 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 03 11:15:26 
crc kubenswrapper[4756]: E1203 11:15:26.907242 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.907261 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api" Dec 03 11:15:26 crc kubenswrapper[4756]: E1203 11:15:26.907286 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api-log" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.907293 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api-log" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.907549 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.907583 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" containerName="barbican-api-log" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.909028 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-logs" (OuterVolumeSpecName: "logs") pod "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" (UID: "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.912473 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.925631 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.925970 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.926778 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-b7x6v" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.932971 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" (UID: "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.933194 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-kube-api-access-d5hqx" (OuterVolumeSpecName: "kube-api-access-d5hqx") pod "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" (UID: "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d"). InnerVolumeSpecName "kube-api-access-d5hqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:26 crc kubenswrapper[4756]: I1203 11:15:26.957530 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" (UID: "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.005051 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.008411 4756 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.008465 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.008491 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.008570 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5hqx\" (UniqueName: \"kubernetes.io/projected/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-kube-api-access-d5hqx\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.112113 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8907533b-6dc9-48f9-8938-7089e2c0cbf5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.112189 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8907533b-6dc9-48f9-8938-7089e2c0cbf5-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.112221 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkm2t\" (UniqueName: \"kubernetes.io/projected/8907533b-6dc9-48f9-8938-7089e2c0cbf5-kube-api-access-dkm2t\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.112260 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8907533b-6dc9-48f9-8938-7089e2c0cbf5-openstack-config\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.146261 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data" (OuterVolumeSpecName: "config-data") pod "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" (UID: "d26e7e64-2332-45c9-a67a-dcaa6a43dc5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.220693 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8907533b-6dc9-48f9-8938-7089e2c0cbf5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.220819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkm2t\" (UniqueName: \"kubernetes.io/projected/8907533b-6dc9-48f9-8938-7089e2c0cbf5-kube-api-access-dkm2t\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.220933 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8907533b-6dc9-48f9-8938-7089e2c0cbf5-openstack-config\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.221318 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8907533b-6dc9-48f9-8938-7089e2c0cbf5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.222575 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.224544 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/8907533b-6dc9-48f9-8938-7089e2c0cbf5-openstack-config\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.227124 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8907533b-6dc9-48f9-8938-7089e2c0cbf5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.252070 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8907533b-6dc9-48f9-8938-7089e2c0cbf5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.254880 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkm2t\" (UniqueName: \"kubernetes.io/projected/8907533b-6dc9-48f9-8938-7089e2c0cbf5-kube-api-access-dkm2t\") pod \"openstackclient\" (UID: \"8907533b-6dc9-48f9-8938-7089e2c0cbf5\") " pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.436858 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.505303 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f87d4bff8-sdtnv" event={"ID":"d26e7e64-2332-45c9-a67a-dcaa6a43dc5d","Type":"ContainerDied","Data":"649d3a72c8ae4497b188e447f2ca8d2ab6cb27e4aa51326927caea8579ec95b3"} Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.505405 4756 scope.go:117] "RemoveContainer" containerID="52cf40f8eac376ac4390e7a9105c7685055dfea46f395492db5f8f41472d0643" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.505687 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f87d4bff8-sdtnv" Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.547160 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f87d4bff8-sdtnv"] Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.567840 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6f87d4bff8-sdtnv"] Dec 03 11:15:27 crc kubenswrapper[4756]: I1203 11:15:27.583457 4756 scope.go:117] "RemoveContainer" containerID="21ada5d2ba2f401d2950eb31b7582fac5cec5063620dea08c151f247dca3b386" Dec 03 11:15:28 crc kubenswrapper[4756]: I1203 11:15:28.075169 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 03 11:15:28 crc kubenswrapper[4756]: I1203 11:15:28.519158 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8907533b-6dc9-48f9-8938-7089e2c0cbf5","Type":"ContainerStarted","Data":"5a3978f6376d06d31b93a1471827511aeabe7a7a85aa63a17279024ea28066d4"} Dec 03 11:15:28 crc kubenswrapper[4756]: I1203 11:15:28.523001 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"ea68c0aa-0cf1-4437-ad1a-988b44fe4032","Type":"ContainerStarted","Data":"480e0bba97cf9f9621302271e15d0a0fe59d14158cc3696164dd80f829bc5514"} Dec 03 11:15:28 crc kubenswrapper[4756]: I1203 11:15:28.548528 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.548499889 podStartE2EDuration="4.548499889s" podCreationTimestamp="2025-12-03 11:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:28.54276963 +0000 UTC m=+1339.572770884" watchObservedRunningTime="2025-12-03 11:15:28.548499889 +0000 UTC m=+1339.578501133" Dec 03 11:15:29 crc kubenswrapper[4756]: I1203 11:15:29.247822 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d26e7e64-2332-45c9-a67a-dcaa6a43dc5d" path="/var/lib/kubelet/pods/d26e7e64-2332-45c9-a67a-dcaa6a43dc5d/volumes" Dec 03 11:15:29 crc kubenswrapper[4756]: I1203 11:15:29.755564 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 11:15:30 crc kubenswrapper[4756]: I1203 11:15:30.036863 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.330289 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-cdcf55d99-k7bmw"] Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.333050 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.341513 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.342455 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.342617 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.353055 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cdcf55d99-k7bmw"] Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.459790 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-public-tls-certs\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.459986 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f962509-8bee-4b75-a51f-f517ffa88908-run-httpd\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.460432 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f962509-8bee-4b75-a51f-f517ffa88908-log-httpd\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.460597 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-config-data\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.460670 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f962509-8bee-4b75-a51f-f517ffa88908-etc-swift\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.460790 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpb6l\" (UniqueName: \"kubernetes.io/projected/2f962509-8bee-4b75-a51f-f517ffa88908-kube-api-access-dpb6l\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.460880 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-combined-ca-bundle\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.460909 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-internal-tls-certs\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc 
kubenswrapper[4756]: I1203 11:15:33.562901 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-config-data\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.563072 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f962509-8bee-4b75-a51f-f517ffa88908-etc-swift\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.563175 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpb6l\" (UniqueName: \"kubernetes.io/projected/2f962509-8bee-4b75-a51f-f517ffa88908-kube-api-access-dpb6l\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.563246 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-combined-ca-bundle\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.563285 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-internal-tls-certs\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.563333 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-public-tls-certs\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.563435 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f962509-8bee-4b75-a51f-f517ffa88908-run-httpd\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.563519 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f962509-8bee-4b75-a51f-f517ffa88908-log-httpd\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.564553 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f962509-8bee-4b75-a51f-f517ffa88908-log-httpd\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.565003 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f962509-8bee-4b75-a51f-f517ffa88908-run-httpd\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.572491 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-combined-ca-bundle\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.572657 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-config-data\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.573257 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-public-tls-certs\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.574635 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2f962509-8bee-4b75-a51f-f517ffa88908-etc-swift\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.581415 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f962509-8bee-4b75-a51f-f517ffa88908-internal-tls-certs\") pod \"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.607195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpb6l\" (UniqueName: \"kubernetes.io/projected/2f962509-8bee-4b75-a51f-f517ffa88908-kube-api-access-dpb6l\") pod 
\"swift-proxy-cdcf55d99-k7bmw\" (UID: \"2f962509-8bee-4b75-a51f-f517ffa88908\") " pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:33 crc kubenswrapper[4756]: I1203 11:15:33.724373 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:34 crc kubenswrapper[4756]: I1203 11:15:34.526886 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cdcf55d99-k7bmw"] Dec 03 11:15:35 crc kubenswrapper[4756]: I1203 11:15:35.089870 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 11:15:37 crc kubenswrapper[4756]: I1203 11:15:37.605297 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:37 crc kubenswrapper[4756]: I1203 11:15:37.606759 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="sg-core" containerID="cri-o://737eef6f6b2d8c3feec2f27b7ab5bfeaf0b88323d239f00717facba2b2b068d4" gracePeriod=30 Dec 03 11:15:37 crc kubenswrapper[4756]: I1203 11:15:37.606809 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="proxy-httpd" containerID="cri-o://a3ced57096b7d1e13bb21d986d2141a0e6f2a01c890edfebf7ddab79ff29dae4" gracePeriod=30 Dec 03 11:15:37 crc kubenswrapper[4756]: I1203 11:15:37.606859 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-notification-agent" containerID="cri-o://303966d7673f233bd9800d73634f121e60df9d6f2edd473d3c4386937e3c21c9" gracePeriod=30 Dec 03 11:15:37 crc kubenswrapper[4756]: I1203 11:15:37.607310 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-central-agent" containerID="cri-o://f2a32bf79de6ed0e680db33eaa4dfa49db208156ce7b54014ae00ba36fd0aed1" gracePeriod=30 Dec 03 11:15:37 crc kubenswrapper[4756]: I1203 11:15:37.631004 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 03 11:15:38 crc kubenswrapper[4756]: I1203 11:15:38.660524 4756 generic.go:334] "Generic (PLEG): container finished" podID="18785e02-8cf4-49ce-97bf-90543dea446c" containerID="a3ced57096b7d1e13bb21d986d2141a0e6f2a01c890edfebf7ddab79ff29dae4" exitCode=0 Dec 03 11:15:38 crc kubenswrapper[4756]: I1203 11:15:38.660813 4756 generic.go:334] "Generic (PLEG): container finished" podID="18785e02-8cf4-49ce-97bf-90543dea446c" containerID="737eef6f6b2d8c3feec2f27b7ab5bfeaf0b88323d239f00717facba2b2b068d4" exitCode=2 Dec 03 11:15:38 crc kubenswrapper[4756]: I1203 11:15:38.660822 4756 generic.go:334] "Generic (PLEG): container finished" podID="18785e02-8cf4-49ce-97bf-90543dea446c" containerID="f2a32bf79de6ed0e680db33eaa4dfa49db208156ce7b54014ae00ba36fd0aed1" exitCode=0 Dec 03 11:15:38 crc kubenswrapper[4756]: I1203 11:15:38.660622 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerDied","Data":"a3ced57096b7d1e13bb21d986d2141a0e6f2a01c890edfebf7ddab79ff29dae4"} Dec 03 11:15:38 crc kubenswrapper[4756]: I1203 11:15:38.660864 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerDied","Data":"737eef6f6b2d8c3feec2f27b7ab5bfeaf0b88323d239f00717facba2b2b068d4"} Dec 03 11:15:38 crc kubenswrapper[4756]: I1203 11:15:38.660880 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerDied","Data":"f2a32bf79de6ed0e680db33eaa4dfa49db208156ce7b54014ae00ba36fd0aed1"} Dec 03 11:15:39 crc kubenswrapper[4756]: I1203 11:15:39.676913 4756 generic.go:334] "Generic (PLEG): container finished" podID="18785e02-8cf4-49ce-97bf-90543dea446c" containerID="303966d7673f233bd9800d73634f121e60df9d6f2edd473d3c4386937e3c21c9" exitCode=0 Dec 03 11:15:39 crc kubenswrapper[4756]: I1203 11:15:39.677003 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerDied","Data":"303966d7673f233bd9800d73634f121e60df9d6f2edd473d3c4386937e3c21c9"} Dec 03 11:15:40 crc kubenswrapper[4756]: I1203 11:15:40.704112 4756 generic.go:334] "Generic (PLEG): container finished" podID="00c35a0d-70b4-453d-974a-85b638505280" containerID="7b757dbbd271bf889135a4e68662a58696ae5a3097ea2a472cbfee50d00094ed" exitCode=137 Dec 03 11:15:40 crc kubenswrapper[4756]: I1203 11:15:40.704704 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bc647888-tcn4m" event={"ID":"00c35a0d-70b4-453d-974a-85b638505280","Type":"ContainerDied","Data":"7b757dbbd271bf889135a4e68662a58696ae5a3097ea2a472cbfee50d00094ed"} Dec 03 11:15:42 crc kubenswrapper[4756]: I1203 11:15:42.166020 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:15:42 crc kubenswrapper[4756]: I1203 11:15:42.166315 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-log" containerID="cri-o://d4c8f09b515eb62bf4ec2daadbc55c6db14607f89fad07e890899c860ff31e1c" gracePeriod=30 Dec 03 11:15:42 crc kubenswrapper[4756]: I1203 11:15:42.166643 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-httpd" containerID="cri-o://4dd4714faa7b8bce20126f15df29eab17b958ed87dc2f18b4fb516cc286970be" gracePeriod=30 Dec 03 11:15:42 crc kubenswrapper[4756]: I1203 11:15:42.729603 4756 generic.go:334] "Generic (PLEG): container finished" podID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerID="d4c8f09b515eb62bf4ec2daadbc55c6db14607f89fad07e890899c860ff31e1c" exitCode=143 Dec 03 11:15:42 crc kubenswrapper[4756]: I1203 11:15:42.729773 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"069bebae-8f44-4248-9a18-5d1228c32cf2","Type":"ContainerDied","Data":"d4c8f09b515eb62bf4ec2daadbc55c6db14607f89fad07e890899c860ff31e1c"} Dec 03 11:15:42 crc kubenswrapper[4756]: W1203 11:15:42.981811 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f962509_8bee_4b75_a51f_f517ffa88908.slice/crio-cf51eb4c602abf75e72590a3f89d5ffc339a474d75c22cf908c4316798bc0bb3 WatchSource:0}: Error finding container cf51eb4c602abf75e72590a3f89d5ffc339a474d75c22cf908c4316798bc0bb3: Status 404 returned error can't find the container with id cf51eb4c602abf75e72590a3f89d5ffc339a474d75c22cf908c4316798bc0bb3 Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.437116 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.523249 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-log-httpd\") pod \"18785e02-8cf4-49ce-97bf-90543dea446c\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.523360 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg5lg\" (UniqueName: \"kubernetes.io/projected/18785e02-8cf4-49ce-97bf-90543dea446c-kube-api-access-sg5lg\") pod \"18785e02-8cf4-49ce-97bf-90543dea446c\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.523474 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-scripts\") pod \"18785e02-8cf4-49ce-97bf-90543dea446c\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.523521 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-combined-ca-bundle\") pod \"18785e02-8cf4-49ce-97bf-90543dea446c\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.523637 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-config-data\") pod \"18785e02-8cf4-49ce-97bf-90543dea446c\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.523679 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-sg-core-conf-yaml\") pod \"18785e02-8cf4-49ce-97bf-90543dea446c\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.523718 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-run-httpd\") pod \"18785e02-8cf4-49ce-97bf-90543dea446c\" (UID: \"18785e02-8cf4-49ce-97bf-90543dea446c\") " Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.524683 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "18785e02-8cf4-49ce-97bf-90543dea446c" (UID: "18785e02-8cf4-49ce-97bf-90543dea446c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.525297 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "18785e02-8cf4-49ce-97bf-90543dea446c" (UID: "18785e02-8cf4-49ce-97bf-90543dea446c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.538185 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-scripts" (OuterVolumeSpecName: "scripts") pod "18785e02-8cf4-49ce-97bf-90543dea446c" (UID: "18785e02-8cf4-49ce-97bf-90543dea446c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.541202 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18785e02-8cf4-49ce-97bf-90543dea446c-kube-api-access-sg5lg" (OuterVolumeSpecName: "kube-api-access-sg5lg") pod "18785e02-8cf4-49ce-97bf-90543dea446c" (UID: "18785e02-8cf4-49ce-97bf-90543dea446c"). InnerVolumeSpecName "kube-api-access-sg5lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.566351 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "18785e02-8cf4-49ce-97bf-90543dea446c" (UID: "18785e02-8cf4-49ce-97bf-90543dea446c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.626845 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.627120 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg5lg\" (UniqueName: \"kubernetes.io/projected/18785e02-8cf4-49ce-97bf-90543dea446c-kube-api-access-sg5lg\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.627138 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.627152 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.627164 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18785e02-8cf4-49ce-97bf-90543dea446c-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.645162 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18785e02-8cf4-49ce-97bf-90543dea446c" (UID: "18785e02-8cf4-49ce-97bf-90543dea446c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.680297 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-config-data" (OuterVolumeSpecName: "config-data") pod "18785e02-8cf4-49ce-97bf-90543dea446c" (UID: "18785e02-8cf4-49ce-97bf-90543dea446c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.729271 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.729314 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18785e02-8cf4-49ce-97bf-90543dea446c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.743987 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.745078 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18785e02-8cf4-49ce-97bf-90543dea446c","Type":"ContainerDied","Data":"7afd4648f0dfd5e279c45cd0291356e82d30c723fc2227f2078bf47370163f0c"} Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.745162 4756 scope.go:117] "RemoveContainer" containerID="a3ced57096b7d1e13bb21d986d2141a0e6f2a01c890edfebf7ddab79ff29dae4" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.752274 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cdcf55d99-k7bmw" event={"ID":"2f962509-8bee-4b75-a51f-f517ffa88908","Type":"ContainerStarted","Data":"9207b9507ab40d51650cf92edcb199d8a44a125ca7f769e9bf9ef4ea97bdcd53"} Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.752330 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cdcf55d99-k7bmw" event={"ID":"2f962509-8bee-4b75-a51f-f517ffa88908","Type":"ContainerStarted","Data":"a61efcb88f7d1f31a6089aad98a0469f3f5799a9ead24fb8b49c6e5827cf3450"} Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.752354 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cdcf55d99-k7bmw" event={"ID":"2f962509-8bee-4b75-a51f-f517ffa88908","Type":"ContainerStarted","Data":"cf51eb4c602abf75e72590a3f89d5ffc339a474d75c22cf908c4316798bc0bb3"} Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.753910 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.753941 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.764615 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66bc647888-tcn4m" 
event={"ID":"00c35a0d-70b4-453d-974a-85b638505280","Type":"ContainerStarted","Data":"998d481238eb0d77b34ed8a975065cb7ac03d84e3cecb92daf166c6f985712e4"} Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.768183 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8907533b-6dc9-48f9-8938-7089e2c0cbf5","Type":"ContainerStarted","Data":"f622055bfcb42281a47540f97de6b1e5f32dae18acd3489e046c5933c4292bab"} Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.774266 4756 scope.go:117] "RemoveContainer" containerID="737eef6f6b2d8c3feec2f27b7ab5bfeaf0b88323d239f00717facba2b2b068d4" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.812293 4756 scope.go:117] "RemoveContainer" containerID="303966d7673f233bd9800d73634f121e60df9d6f2edd473d3c4386937e3c21c9" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.817669 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-cdcf55d99-k7bmw" podStartSLOduration=10.817642881 podStartE2EDuration="10.817642881s" podCreationTimestamp="2025-12-03 11:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:43.786922425 +0000 UTC m=+1354.816923669" watchObservedRunningTime="2025-12-03 11:15:43.817642881 +0000 UTC m=+1354.847644125" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.820548 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.769375512 podStartE2EDuration="17.820538131s" podCreationTimestamp="2025-12-03 11:15:26 +0000 UTC" firstStartedPulling="2025-12-03 11:15:28.09343783 +0000 UTC m=+1339.123439074" lastFinishedPulling="2025-12-03 11:15:43.144600449 +0000 UTC m=+1354.174601693" observedRunningTime="2025-12-03 11:15:43.810196399 +0000 UTC m=+1354.840197633" watchObservedRunningTime="2025-12-03 11:15:43.820538131 +0000 UTC m=+1354.850539375" Dec 
03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.842814 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.845103 4756 scope.go:117] "RemoveContainer" containerID="f2a32bf79de6ed0e680db33eaa4dfa49db208156ce7b54014ae00ba36fd0aed1" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.854903 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.883482 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:43 crc kubenswrapper[4756]: E1203 11:15:43.884120 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="proxy-httpd" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884145 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="proxy-httpd" Dec 03 11:15:43 crc kubenswrapper[4756]: E1203 11:15:43.884196 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-central-agent" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884205 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-central-agent" Dec 03 11:15:43 crc kubenswrapper[4756]: E1203 11:15:43.884229 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-notification-agent" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884236 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-notification-agent" Dec 03 11:15:43 crc kubenswrapper[4756]: E1203 11:15:43.884248 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="sg-core" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884257 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="sg-core" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884464 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-notification-agent" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884492 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="ceilometer-central-agent" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884508 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="proxy-httpd" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.884519 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" containerName="sg-core" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.886561 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.895668 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.906452 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:15:43 crc kubenswrapper[4756]: I1203 11:15:43.916701 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.035853 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-run-httpd\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.035925 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.036120 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-log-httpd\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.036172 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-scripts\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " 
pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.036214 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-config-data\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.036241 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrk8\" (UniqueName: \"kubernetes.io/projected/77240248-19ee-4d0b-9d01-781b90e90ebf-kube-api-access-nhrk8\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.036280 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.138751 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.138860 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-log-httpd\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.138893 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-scripts\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.138928 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-config-data\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.138965 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrk8\" (UniqueName: \"kubernetes.io/projected/77240248-19ee-4d0b-9d01-781b90e90ebf-kube-api-access-nhrk8\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.139016 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.139084 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-run-httpd\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.139757 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-log-httpd\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 
crc kubenswrapper[4756]: I1203 11:15:44.139806 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-run-httpd\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.147248 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-scripts\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.148662 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.149916 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.153977 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-config-data\") pod \"ceilometer-0\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.189001 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrk8\" (UniqueName: \"kubernetes.io/projected/77240248-19ee-4d0b-9d01-781b90e90ebf-kube-api-access-nhrk8\") pod \"ceilometer-0\" (UID: 
\"77240248-19ee-4d0b-9d01-781b90e90ebf\") " pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: I1203 11:15:44.334651 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:15:44 crc kubenswrapper[4756]: W1203 11:15:44.998695 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77240248_19ee_4d0b_9d01_781b90e90ebf.slice/crio-ed52ac8f2cc02ec837b721e8c48c63df77c127e4cc02c1b8a3ca452774e39e65 WatchSource:0}: Error finding container ed52ac8f2cc02ec837b721e8c48c63df77c127e4cc02c1b8a3ca452774e39e65: Status 404 returned error can't find the container with id ed52ac8f2cc02ec837b721e8c48c63df77c127e4cc02c1b8a3ca452774e39e65 Dec 03 11:15:45 crc kubenswrapper[4756]: I1203 11:15:45.008303 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:15:45 crc kubenswrapper[4756]: I1203 11:15:45.246230 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18785e02-8cf4-49ce-97bf-90543dea446c" path="/var/lib/kubelet/pods/18785e02-8cf4-49ce-97bf-90543dea446c/volumes" Dec 03 11:15:45 crc kubenswrapper[4756]: I1203 11:15:45.794934 4756 generic.go:334] "Generic (PLEG): container finished" podID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerID="4dd4714faa7b8bce20126f15df29eab17b958ed87dc2f18b4fb516cc286970be" exitCode=0 Dec 03 11:15:45 crc kubenswrapper[4756]: I1203 11:15:45.795041 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"069bebae-8f44-4248-9a18-5d1228c32cf2","Type":"ContainerDied","Data":"4dd4714faa7b8bce20126f15df29eab17b958ed87dc2f18b4fb516cc286970be"} Dec 03 11:15:45 crc kubenswrapper[4756]: I1203 11:15:45.800860 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerStarted","Data":"ed52ac8f2cc02ec837b721e8c48c63df77c127e4cc02c1b8a3ca452774e39e65"} Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.605856 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.712507 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.712863 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-internal-tls-certs\") pod \"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.713132 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-config-data\") pod \"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.713361 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-scripts\") pod \"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.714216 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjxg9\" (UniqueName: \"kubernetes.io/projected/069bebae-8f44-4248-9a18-5d1228c32cf2-kube-api-access-rjxg9\") pod 
\"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.714375 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-httpd-run\") pod \"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.714553 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-combined-ca-bundle\") pod \"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.715243 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-logs\") pod \"069bebae-8f44-4248-9a18-5d1228c32cf2\" (UID: \"069bebae-8f44-4248-9a18-5d1228c32cf2\") " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.714881 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.723415 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-logs" (OuterVolumeSpecName: "logs") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.724215 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069bebae-8f44-4248-9a18-5d1228c32cf2-kube-api-access-rjxg9" (OuterVolumeSpecName: "kube-api-access-rjxg9") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "kube-api-access-rjxg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.736709 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.740564 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-scripts" (OuterVolumeSpecName: "scripts") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.761760 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.809815 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-config-data" (OuterVolumeSpecName: "config-data") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.817941 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.818065 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.818075 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjxg9\" (UniqueName: \"kubernetes.io/projected/069bebae-8f44-4248-9a18-5d1228c32cf2-kube-api-access-rjxg9\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.818089 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.818099 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.818108 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/069bebae-8f44-4248-9a18-5d1228c32cf2-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.818133 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.819037 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"069bebae-8f44-4248-9a18-5d1228c32cf2","Type":"ContainerDied","Data":"469674dd1f87e0360358ab7d7342491adc36e0c5a3357c143ba8dd650233da2a"} Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.819185 4756 scope.go:117] "RemoveContainer" containerID="4dd4714faa7b8bce20126f15df29eab17b958ed87dc2f18b4fb516cc286970be" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.819297 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.819167 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "069bebae-8f44-4248-9a18-5d1228c32cf2" (UID: "069bebae-8f44-4248-9a18-5d1228c32cf2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.917126 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.933337 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.933801 4756 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/069bebae-8f44-4248-9a18-5d1228c32cf2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:46 crc kubenswrapper[4756]: I1203 11:15:46.989276 4756 scope.go:117] "RemoveContainer" containerID="d4c8f09b515eb62bf4ec2daadbc55c6db14607f89fad07e890899c860ff31e1c" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.158727 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.166860 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.189062 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:15:47 crc kubenswrapper[4756]: E1203 11:15:47.189493 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-log" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.189512 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-log" Dec 03 11:15:47 crc kubenswrapper[4756]: E1203 11:15:47.189536 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-httpd" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.189544 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-httpd" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.189712 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-httpd" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.189730 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" containerName="glance-log" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.190715 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.193602 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.193606 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.210710 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238550 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238591 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238654 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/543ab57c-507b-4266-9105-e7e09e254311-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238755 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-scripts\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238792 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-config-data\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238815 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238851 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/543ab57c-507b-4266-9105-e7e09e254311-logs\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.238893 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qv7f\" (UniqueName: \"kubernetes.io/projected/543ab57c-507b-4266-9105-e7e09e254311-kube-api-access-8qv7f\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.248023 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069bebae-8f44-4248-9a18-5d1228c32cf2" path="/var/lib/kubelet/pods/069bebae-8f44-4248-9a18-5d1228c32cf2/volumes" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543ab57c-507b-4266-9105-e7e09e254311-logs\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341332 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qv7f\" (UniqueName: \"kubernetes.io/projected/543ab57c-507b-4266-9105-e7e09e254311-kube-api-access-8qv7f\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341367 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " 
pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341400 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341453 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/543ab57c-507b-4266-9105-e7e09e254311-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341553 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-scripts\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341599 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-config-data\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.341640 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc 
kubenswrapper[4756]: I1203 11:15:47.341985 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/543ab57c-507b-4266-9105-e7e09e254311-logs\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.342079 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/543ab57c-507b-4266-9105-e7e09e254311-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.342433 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.349787 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-config-data\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.351351 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.353730 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.358634 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/543ab57c-507b-4266-9105-e7e09e254311-scripts\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.367604 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qv7f\" (UniqueName: \"kubernetes.io/projected/543ab57c-507b-4266-9105-e7e09e254311-kube-api-access-8qv7f\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.401577 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"543ab57c-507b-4266-9105-e7e09e254311\") " pod="openstack/glance-default-internal-api-0" Dec 03 11:15:47 crc kubenswrapper[4756]: I1203 11:15:47.519430 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:48 crc kubenswrapper[4756]: I1203 11:15:48.762736 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:48 crc kubenswrapper[4756]: I1203 11:15:48.766054 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cdcf55d99-k7bmw" Dec 03 11:15:48 crc kubenswrapper[4756]: I1203 11:15:48.892477 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 11:15:48 crc kubenswrapper[4756]: W1203 11:15:48.908052 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod543ab57c_507b_4266_9105_e7e09e254311.slice/crio-be258935e848fa60ab0d2a5ff6740d7d934981273912db7aed3eb1deb2665c09 WatchSource:0}: Error finding container be258935e848fa60ab0d2a5ff6740d7d934981273912db7aed3eb1deb2665c09: Status 404 returned error can't find the container with id be258935e848fa60ab0d2a5ff6740d7d934981273912db7aed3eb1deb2665c09 Dec 03 11:15:49 crc kubenswrapper[4756]: I1203 11:15:49.871543 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543ab57c-507b-4266-9105-e7e09e254311","Type":"ContainerStarted","Data":"be258935e848fa60ab0d2a5ff6740d7d934981273912db7aed3eb1deb2665c09"} Dec 03 11:15:49 crc kubenswrapper[4756]: I1203 11:15:49.874235 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerStarted","Data":"1ddca1103b180640bb50d8052ca04d379078a1ae93df9f905176eb70a9e85f3f"} Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.247312 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.247432 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.850494 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lgdzl"] Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.852629 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.932118 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhrp\" (UniqueName: \"kubernetes.io/projected/a00eb590-84e6-4087-b370-226af97b869a-kube-api-access-bkhrp\") pod \"nova-api-db-create-lgdzl\" (UID: \"a00eb590-84e6-4087-b370-226af97b869a\") " pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.932226 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00eb590-84e6-4087-b370-226af97b869a-operator-scripts\") pod \"nova-api-db-create-lgdzl\" (UID: \"a00eb590-84e6-4087-b370-226af97b869a\") " pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.959814 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lgdzl"] Dec 03 11:15:50 crc kubenswrapper[4756]: I1203 11:15:50.988355 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543ab57c-507b-4266-9105-e7e09e254311","Type":"ContainerStarted","Data":"2bc5553adbe9fddc9480287cab364949d91347b8f85d1c4761ec505d93f7d45e"} Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.012998 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8fddf"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.014527 4756 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.034425 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00eb590-84e6-4087-b370-226af97b869a-operator-scripts\") pod \"nova-api-db-create-lgdzl\" (UID: \"a00eb590-84e6-4087-b370-226af97b869a\") " pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.034548 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhrp\" (UniqueName: \"kubernetes.io/projected/a00eb590-84e6-4087-b370-226af97b869a-kube-api-access-bkhrp\") pod \"nova-api-db-create-lgdzl\" (UID: \"a00eb590-84e6-4087-b370-226af97b869a\") " pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.034596 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bf87416-9370-4447-8d8d-5d303f5126ae-operator-scripts\") pod \"nova-cell0-db-create-8fddf\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.034640 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncfjc\" (UniqueName: \"kubernetes.io/projected/2bf87416-9370-4447-8d8d-5d303f5126ae-kube-api-access-ncfjc\") pod \"nova-cell0-db-create-8fddf\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.036485 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00eb590-84e6-4087-b370-226af97b869a-operator-scripts\") pod \"nova-api-db-create-lgdzl\" (UID: 
\"a00eb590-84e6-4087-b370-226af97b869a\") " pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.084422 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8fddf"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.162496 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bf87416-9370-4447-8d8d-5d303f5126ae-operator-scripts\") pod \"nova-cell0-db-create-8fddf\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.162716 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncfjc\" (UniqueName: \"kubernetes.io/projected/2bf87416-9370-4447-8d8d-5d303f5126ae-kube-api-access-ncfjc\") pod \"nova-cell0-db-create-8fddf\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.171712 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhrp\" (UniqueName: \"kubernetes.io/projected/a00eb590-84e6-4087-b370-226af97b869a-kube-api-access-bkhrp\") pod \"nova-api-db-create-lgdzl\" (UID: \"a00eb590-84e6-4087-b370-226af97b869a\") " pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.187646 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bf87416-9370-4447-8d8d-5d303f5126ae-operator-scripts\") pod \"nova-cell0-db-create-8fddf\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.232726 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.233734 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncfjc\" (UniqueName: \"kubernetes.io/projected/2bf87416-9370-4447-8d8d-5d303f5126ae-kube-api-access-ncfjc\") pod \"nova-cell0-db-create-8fddf\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.346309 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-149d-account-create-update-2gcn6"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.348700 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.359853 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.397382 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-149d-account-create-update-2gcn6"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.435046 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8p42c"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.437159 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.453760 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.459946 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8p42c"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.476718 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz45g\" (UniqueName: \"kubernetes.io/projected/875a8b1b-844e-435e-8656-4fbef59b74af-kube-api-access-wz45g\") pod \"nova-api-149d-account-create-update-2gcn6\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.476917 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a8b1b-844e-435e-8656-4fbef59b74af-operator-scripts\") pod \"nova-api-149d-account-create-update-2gcn6\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.490641 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.491116 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerName="glance-log" containerID="cri-o://55ef2c10a25c455e62cb0a614e6a482155febc18f3aa686f80e8f9e268e7f293" gracePeriod=30 Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.491707 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerName="glance-httpd" containerID="cri-o://57ef720a099194ecbf8b5b1c61e2a1bce3f746f846b58554616b2e68920c1550" 
gracePeriod=30 Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.530944 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-99a3-account-create-update-mlb88"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.535079 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.539792 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.563031 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-99a3-account-create-update-mlb88"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.578813 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4962e497-d702-41df-b7b7-a9a873359aa3-operator-scripts\") pod \"nova-cell1-db-create-8p42c\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.578992 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz45g\" (UniqueName: \"kubernetes.io/projected/875a8b1b-844e-435e-8656-4fbef59b74af-kube-api-access-wz45g\") pod \"nova-api-149d-account-create-update-2gcn6\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.579051 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5rg\" (UniqueName: \"kubernetes.io/projected/4962e497-d702-41df-b7b7-a9a873359aa3-kube-api-access-6c5rg\") pod \"nova-cell1-db-create-8p42c\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " pod="openstack/nova-cell1-db-create-8p42c" Dec 03 
11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.579159 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a8b1b-844e-435e-8656-4fbef59b74af-operator-scripts\") pod \"nova-api-149d-account-create-update-2gcn6\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.581388 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a8b1b-844e-435e-8656-4fbef59b74af-operator-scripts\") pod \"nova-api-149d-account-create-update-2gcn6\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.588387 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-aad7-account-create-update-9hdq8"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.589839 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.598484 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.615945 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-aad7-account-create-update-9hdq8"] Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.639168 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz45g\" (UniqueName: \"kubernetes.io/projected/875a8b1b-844e-435e-8656-4fbef59b74af-kube-api-access-wz45g\") pod \"nova-api-149d-account-create-update-2gcn6\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.681142 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5rg\" (UniqueName: \"kubernetes.io/projected/4962e497-d702-41df-b7b7-a9a873359aa3-kube-api-access-6c5rg\") pod \"nova-cell1-db-create-8p42c\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.681233 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-operator-scripts\") pod \"nova-cell1-99a3-account-create-update-mlb88\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.681271 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-operator-scripts\") pod 
\"nova-cell0-aad7-account-create-update-9hdq8\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.681321 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgcd8\" (UniqueName: \"kubernetes.io/projected/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-kube-api-access-kgcd8\") pod \"nova-cell1-99a3-account-create-update-mlb88\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.681380 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpj5n\" (UniqueName: \"kubernetes.io/projected/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-kube-api-access-fpj5n\") pod \"nova-cell0-aad7-account-create-update-9hdq8\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.681448 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4962e497-d702-41df-b7b7-a9a873359aa3-operator-scripts\") pod \"nova-cell1-db-create-8p42c\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.682028 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.682684 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4962e497-d702-41df-b7b7-a9a873359aa3-operator-scripts\") pod \"nova-cell1-db-create-8p42c\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.734132 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5rg\" (UniqueName: \"kubernetes.io/projected/4962e497-d702-41df-b7b7-a9a873359aa3-kube-api-access-6c5rg\") pod \"nova-cell1-db-create-8p42c\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.778969 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.786389 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgcd8\" (UniqueName: \"kubernetes.io/projected/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-kube-api-access-kgcd8\") pod \"nova-cell1-99a3-account-create-update-mlb88\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.786542 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpj5n\" (UniqueName: \"kubernetes.io/projected/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-kube-api-access-fpj5n\") pod \"nova-cell0-aad7-account-create-update-9hdq8\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.786848 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-operator-scripts\") pod \"nova-cell1-99a3-account-create-update-mlb88\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.786935 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-operator-scripts\") pod \"nova-cell0-aad7-account-create-update-9hdq8\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.791878 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-operator-scripts\") pod \"nova-cell1-99a3-account-create-update-mlb88\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.792503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-operator-scripts\") pod \"nova-cell0-aad7-account-create-update-9hdq8\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.810081 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgcd8\" (UniqueName: \"kubernetes.io/projected/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-kube-api-access-kgcd8\") pod \"nova-cell1-99a3-account-create-update-mlb88\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " 
pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.828668 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpj5n\" (UniqueName: \"kubernetes.io/projected/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-kube-api-access-fpj5n\") pod \"nova-cell0-aad7-account-create-update-9hdq8\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.900313 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:51 crc kubenswrapper[4756]: I1203 11:15:51.964539 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.083784 4756 generic.go:334] "Generic (PLEG): container finished" podID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerID="55ef2c10a25c455e62cb0a614e6a482155febc18f3aa686f80e8f9e268e7f293" exitCode=143 Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.083875 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0507f2c0-f9b3-490f-b2ac-45be22f96c05","Type":"ContainerDied","Data":"55ef2c10a25c455e62cb0a614e6a482155febc18f3aa686f80e8f9e268e7f293"} Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.165915 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lgdzl"] Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.246879 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8fddf"] Dec 03 11:15:52 crc kubenswrapper[4756]: W1203 11:15:52.272623 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bf87416_9370_4447_8d8d_5d303f5126ae.slice/crio-e1fd7f4d586e3fcf85b501e8af5e0204dd83cb252ba6cc2ae7723f9c97e6b880 WatchSource:0}: Error finding container e1fd7f4d586e3fcf85b501e8af5e0204dd83cb252ba6cc2ae7723f9c97e6b880: Status 404 returned error can't find the container with id e1fd7f4d586e3fcf85b501e8af5e0204dd83cb252ba6cc2ae7723f9c97e6b880 Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.455838 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-149d-account-create-update-2gcn6"] Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.607733 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.607814 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.634654 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8p42c"] Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.645816 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-99a3-account-create-update-mlb88"] Dec 03 11:15:52 crc kubenswrapper[4756]: I1203 11:15:52.660781 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-aad7-account-create-update-9hdq8"] Dec 03 11:15:52 crc kubenswrapper[4756]: W1203 11:15:52.756762 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod875a8b1b_844e_435e_8656_4fbef59b74af.slice/crio-aa359aeefefdd33834e910f76ddf9d629825cde0e36a6d28027abc032a229c78 WatchSource:0}: Error finding container aa359aeefefdd33834e910f76ddf9d629825cde0e36a6d28027abc032a229c78: Status 404 returned error can't find the container with id aa359aeefefdd33834e910f76ddf9d629825cde0e36a6d28027abc032a229c78 Dec 03 11:15:52 crc kubenswrapper[4756]: W1203 11:15:52.763196 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4962e497_d702_41df_b7b7_a9a873359aa3.slice/crio-e4809e7721150437f9f762e4911677545262d233c63a29f54c3681789e3e7b59 WatchSource:0}: Error finding container e4809e7721150437f9f762e4911677545262d233c63a29f54c3681789e3e7b59: Status 404 returned error can't find the container with id e4809e7721150437f9f762e4911677545262d233c63a29f54c3681789e3e7b59 Dec 03 11:15:52 crc kubenswrapper[4756]: W1203 11:15:52.763747 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59a9719b_6d2e_4f77_8a55_0bfb9b29d29a.slice/crio-f66bfa0e1a94ea2b0f28e9c1bbd7241939d69e268c15270ee9633ccfb390ac08 WatchSource:0}: Error finding container f66bfa0e1a94ea2b0f28e9c1bbd7241939d69e268c15270ee9633ccfb390ac08: Status 404 returned error can't find the container with id f66bfa0e1a94ea2b0f28e9c1bbd7241939d69e268c15270ee9633ccfb390ac08 Dec 03 11:15:52 crc kubenswrapper[4756]: W1203 11:15:52.767895 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3e4fb9a_1e78_4c8a_b855_6d1bb5c0f38c.slice/crio-101f59ef567a12e76a348ce3fb79f2aa4a0486ba28af5aabc6dc902d0959b490 WatchSource:0}: Error finding container 101f59ef567a12e76a348ce3fb79f2aa4a0486ba28af5aabc6dc902d0959b490: Status 404 returned error can't find the container with id 
101f59ef567a12e76a348ce3fb79f2aa4a0486ba28af5aabc6dc902d0959b490 Dec 03 11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.100756 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8fddf" event={"ID":"2bf87416-9370-4447-8d8d-5d303f5126ae","Type":"ContainerStarted","Data":"e1fd7f4d586e3fcf85b501e8af5e0204dd83cb252ba6cc2ae7723f9c97e6b880"} Dec 03 11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.105462 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" event={"ID":"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a","Type":"ContainerStarted","Data":"f66bfa0e1a94ea2b0f28e9c1bbd7241939d69e268c15270ee9633ccfb390ac08"} Dec 03 11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.107607 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lgdzl" event={"ID":"a00eb590-84e6-4087-b370-226af97b869a","Type":"ContainerStarted","Data":"213f1698b118644b46fbd038ef1d9c7beb6d3e307e94aa222b8381c836b614fd"} Dec 03 11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.115118 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"543ab57c-507b-4266-9105-e7e09e254311","Type":"ContainerStarted","Data":"0bbd684c152fddddabdbb3e01b8ac258046048edf2d102d0762c70e9f182e8b4"} Dec 03 11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.121337 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" event={"ID":"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c","Type":"ContainerStarted","Data":"101f59ef567a12e76a348ce3fb79f2aa4a0486ba28af5aabc6dc902d0959b490"} Dec 03 11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.123709 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-149d-account-create-update-2gcn6" event={"ID":"875a8b1b-844e-435e-8656-4fbef59b74af","Type":"ContainerStarted","Data":"aa359aeefefdd33834e910f76ddf9d629825cde0e36a6d28027abc032a229c78"} Dec 03 
11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.126529 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerStarted","Data":"01c30b4757126fe17d74ecba9e59be57d98720dd9970c5e68388b03d441d9cd0"} Dec 03 11:15:53 crc kubenswrapper[4756]: I1203 11:15:53.131546 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p42c" event={"ID":"4962e497-d702-41df-b7b7-a9a873359aa3","Type":"ContainerStarted","Data":"e4809e7721150437f9f762e4911677545262d233c63a29f54c3681789e3e7b59"} Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.144987 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lgdzl" event={"ID":"a00eb590-84e6-4087-b370-226af97b869a","Type":"ContainerStarted","Data":"c653ce976c069c796fd188682eb0ea90478aeec3290ba71cb6b2b53343724ca4"} Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.149689 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" event={"ID":"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c","Type":"ContainerStarted","Data":"35b01d66dafca6aa51157286bbe05779718ecdac89e337a88a31c6c4ced939ec"} Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.152381 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-149d-account-create-update-2gcn6" event={"ID":"875a8b1b-844e-435e-8656-4fbef59b74af","Type":"ContainerStarted","Data":"faa409fa4b5d5e8b5263ddee5a8a43765103db243621d2699afcce694d3d82a9"} Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.157272 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p42c" event={"ID":"4962e497-d702-41df-b7b7-a9a873359aa3","Type":"ContainerStarted","Data":"6acedcd053e8ffafae6e69be9ee8202144a19cc0252349e48af67fecdf0d3584"} Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.159869 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="2bf87416-9370-4447-8d8d-5d303f5126ae" containerID="7f826bf7b3ab2a57f705974b98d0770dbacbbcce3d3ea1d4f343433681d657f5" exitCode=0 Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.159972 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8fddf" event={"ID":"2bf87416-9370-4447-8d8d-5d303f5126ae","Type":"ContainerDied","Data":"7f826bf7b3ab2a57f705974b98d0770dbacbbcce3d3ea1d4f343433681d657f5"} Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.162244 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" event={"ID":"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a","Type":"ContainerStarted","Data":"d750690109fc9e17b0952f0e84617875a1110328cff2cad951b5d8b66a8aa14f"} Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.170855 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-lgdzl" podStartSLOduration=4.170836503 podStartE2EDuration="4.170836503s" podCreationTimestamp="2025-12-03 11:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:54.167625133 +0000 UTC m=+1365.197626377" watchObservedRunningTime="2025-12-03 11:15:54.170836503 +0000 UTC m=+1365.200837747" Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.189390 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-8p42c" podStartSLOduration=3.189367231 podStartE2EDuration="3.189367231s" podCreationTimestamp="2025-12-03 11:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:54.185232552 +0000 UTC m=+1365.215233796" watchObservedRunningTime="2025-12-03 11:15:54.189367231 +0000 UTC m=+1365.219368475" Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.218744 4756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-149d-account-create-update-2gcn6" podStartSLOduration=3.2187200750000002 podStartE2EDuration="3.218720075s" podCreationTimestamp="2025-12-03 11:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:54.212340246 +0000 UTC m=+1365.242341490" watchObservedRunningTime="2025-12-03 11:15:54.218720075 +0000 UTC m=+1365.248721319" Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.237073 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" podStartSLOduration=3.237049626 podStartE2EDuration="3.237049626s" podCreationTimestamp="2025-12-03 11:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:54.232674929 +0000 UTC m=+1365.262676183" watchObservedRunningTime="2025-12-03 11:15:54.237049626 +0000 UTC m=+1365.267050870" Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.290933 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.290909643 podStartE2EDuration="7.290909643s" podCreationTimestamp="2025-12-03 11:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:54.284873624 +0000 UTC m=+1365.314874858" watchObservedRunningTime="2025-12-03 11:15:54.290909643 +0000 UTC m=+1365.320910887" Dec 03 11:15:54 crc kubenswrapper[4756]: I1203 11:15:54.374450 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" podStartSLOduration=3.374422583 podStartE2EDuration="3.374422583s" podCreationTimestamp="2025-12-03 11:15:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:15:54.369127168 +0000 UTC m=+1365.399128432" watchObservedRunningTime="2025-12-03 11:15:54.374422583 +0000 UTC m=+1365.404423837" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.265530 4756 generic.go:334] "Generic (PLEG): container finished" podID="a00eb590-84e6-4087-b370-226af97b869a" containerID="c653ce976c069c796fd188682eb0ea90478aeec3290ba71cb6b2b53343724ca4" exitCode=0 Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.266504 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lgdzl" event={"ID":"a00eb590-84e6-4087-b370-226af97b869a","Type":"ContainerDied","Data":"c653ce976c069c796fd188682eb0ea90478aeec3290ba71cb6b2b53343724ca4"} Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.275509 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerStarted","Data":"df248aaab60b00321758957d470ceb1512c41bd5b557d83155ffc3797da84144"} Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.292023 4756 generic.go:334] "Generic (PLEG): container finished" podID="4962e497-d702-41df-b7b7-a9a873359aa3" containerID="6acedcd053e8ffafae6e69be9ee8202144a19cc0252349e48af67fecdf0d3584" exitCode=0 Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.292111 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p42c" event={"ID":"4962e497-d702-41df-b7b7-a9a873359aa3","Type":"ContainerDied","Data":"6acedcd053e8ffafae6e69be9ee8202144a19cc0252349e48af67fecdf0d3584"} Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.295154 4756 generic.go:334] "Generic (PLEG): container finished" podID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerID="57ef720a099194ecbf8b5b1c61e2a1bce3f746f846b58554616b2e68920c1550" exitCode=0 Dec 03 11:15:55 crc kubenswrapper[4756]: 
I1203 11:15:55.296252 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0507f2c0-f9b3-490f-b2ac-45be22f96c05","Type":"ContainerDied","Data":"57ef720a099194ecbf8b5b1c61e2a1bce3f746f846b58554616b2e68920c1550"} Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.478215 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.510545 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-config-data\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.510743 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-public-tls-certs\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.510834 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrbmj\" (UniqueName: \"kubernetes.io/projected/0507f2c0-f9b3-490f-b2ac-45be22f96c05-kube-api-access-xrbmj\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.510913 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-httpd-run\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.511302 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-scripts\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.511439 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-combined-ca-bundle\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.511496 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-logs\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.511562 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\" (UID: \"0507f2c0-f9b3-490f-b2ac-45be22f96c05\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.513493 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-logs" (OuterVolumeSpecName: "logs") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.514380 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.519697 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.558279 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-scripts" (OuterVolumeSpecName: "scripts") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.587964 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0507f2c0-f9b3-490f-b2ac-45be22f96c05-kube-api-access-xrbmj" (OuterVolumeSpecName: "kube-api-access-xrbmj") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "kube-api-access-xrbmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.615437 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.618260 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrbmj\" (UniqueName: \"kubernetes.io/projected/0507f2c0-f9b3-490f-b2ac-45be22f96c05-kube-api-access-xrbmj\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.619677 4756 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.620372 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.620488 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.620606 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0507f2c0-f9b3-490f-b2ac-45be22f96c05-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.620718 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.681180 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-config-data" (OuterVolumeSpecName: "config-data") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.685911 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.722561 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.722599 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.722943 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.723508 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0507f2c0-f9b3-490f-b2ac-45be22f96c05" (UID: "0507f2c0-f9b3-490f-b2ac-45be22f96c05"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.823743 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncfjc\" (UniqueName: \"kubernetes.io/projected/2bf87416-9370-4447-8d8d-5d303f5126ae-kube-api-access-ncfjc\") pod \"2bf87416-9370-4447-8d8d-5d303f5126ae\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.823846 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bf87416-9370-4447-8d8d-5d303f5126ae-operator-scripts\") pod \"2bf87416-9370-4447-8d8d-5d303f5126ae\" (UID: \"2bf87416-9370-4447-8d8d-5d303f5126ae\") " Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.824591 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bf87416-9370-4447-8d8d-5d303f5126ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bf87416-9370-4447-8d8d-5d303f5126ae" (UID: "2bf87416-9370-4447-8d8d-5d303f5126ae"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.824805 4756 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0507f2c0-f9b3-490f-b2ac-45be22f96c05-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.824831 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bf87416-9370-4447-8d8d-5d303f5126ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.831230 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bf87416-9370-4447-8d8d-5d303f5126ae-kube-api-access-ncfjc" (OuterVolumeSpecName: "kube-api-access-ncfjc") pod "2bf87416-9370-4447-8d8d-5d303f5126ae" (UID: "2bf87416-9370-4447-8d8d-5d303f5126ae"). InnerVolumeSpecName "kube-api-access-ncfjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:55 crc kubenswrapper[4756]: I1203 11:15:55.927060 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncfjc\" (UniqueName: \"kubernetes.io/projected/2bf87416-9370-4447-8d8d-5d303f5126ae-kube-api-access-ncfjc\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.307801 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8fddf" event={"ID":"2bf87416-9370-4447-8d8d-5d303f5126ae","Type":"ContainerDied","Data":"e1fd7f4d586e3fcf85b501e8af5e0204dd83cb252ba6cc2ae7723f9c97e6b880"} Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.308468 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1fd7f4d586e3fcf85b501e8af5e0204dd83cb252ba6cc2ae7723f9c97e6b880" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.308609 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8fddf" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.439229 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.442715 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0507f2c0-f9b3-490f-b2ac-45be22f96c05","Type":"ContainerDied","Data":"93216fc6237a44e02fcd4d3c34bc22d27bc6190de701bcf1020d476eabf31d8c"} Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.444065 4756 scope.go:117] "RemoveContainer" containerID="57ef720a099194ecbf8b5b1c61e2a1bce3f746f846b58554616b2e68920c1550" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.544114 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.554277 4756 scope.go:117] "RemoveContainer" containerID="55ef2c10a25c455e62cb0a614e6a482155febc18f3aa686f80e8f9e268e7f293" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.571442 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.608883 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:15:56 crc kubenswrapper[4756]: E1203 11:15:56.609597 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerName="glance-httpd" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.609618 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerName="glance-httpd" Dec 03 11:15:56 crc kubenswrapper[4756]: E1203 11:15:56.609644 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" 
containerName="glance-log" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.609653 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerName="glance-log" Dec 03 11:15:56 crc kubenswrapper[4756]: E1203 11:15:56.609665 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf87416-9370-4447-8d8d-5d303f5126ae" containerName="mariadb-database-create" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.609676 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf87416-9370-4447-8d8d-5d303f5126ae" containerName="mariadb-database-create" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.609945 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerName="glance-httpd" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.609991 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bf87416-9370-4447-8d8d-5d303f5126ae" containerName="mariadb-database-create" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.610019 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" containerName="glance-log" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.611610 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.620263 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.620663 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.625045 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726024 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66578e3-f591-49d8-b76a-1fc8f8f0d262-logs\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726115 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg66p\" (UniqueName: \"kubernetes.io/projected/b66578e3-f591-49d8-b76a-1fc8f8f0d262-kube-api-access-pg66p\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726145 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-config-data\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726172 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726209 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726237 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-scripts\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726261 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.726361 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66578e3-f591-49d8-b76a-1fc8f8f0d262-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.829559 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b66578e3-f591-49d8-b76a-1fc8f8f0d262-logs\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.829680 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg66p\" (UniqueName: \"kubernetes.io/projected/b66578e3-f591-49d8-b76a-1fc8f8f0d262-kube-api-access-pg66p\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.829722 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-config-data\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.829762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.829822 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.829862 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-scripts\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.829892 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.830083 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66578e3-f591-49d8-b76a-1fc8f8f0d262-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.831067 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66578e3-f591-49d8-b76a-1fc8f8f0d262-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.831652 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66578e3-f591-49d8-b76a-1fc8f8f0d262-logs\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.845665 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.849029 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.849647 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.857002 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-config-data\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.873866 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66578e3-f591-49d8-b76a-1fc8f8f0d262-scripts\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.889213 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg66p\" (UniqueName: \"kubernetes.io/projected/b66578e3-f591-49d8-b76a-1fc8f8f0d262-kube-api-access-pg66p\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " 
pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.916617 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"b66578e3-f591-49d8-b76a-1fc8f8f0d262\") " pod="openstack/glance-default-external-api-0" Dec 03 11:15:56 crc kubenswrapper[4756]: I1203 11:15:56.972910 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.159688 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.205672 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.258089 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4962e497-d702-41df-b7b7-a9a873359aa3-operator-scripts\") pod \"4962e497-d702-41df-b7b7-a9a873359aa3\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.258263 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c5rg\" (UniqueName: \"kubernetes.io/projected/4962e497-d702-41df-b7b7-a9a873359aa3-kube-api-access-6c5rg\") pod \"4962e497-d702-41df-b7b7-a9a873359aa3\" (UID: \"4962e497-d702-41df-b7b7-a9a873359aa3\") " Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.260011 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4962e497-d702-41df-b7b7-a9a873359aa3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4962e497-d702-41df-b7b7-a9a873359aa3" 
(UID: "4962e497-d702-41df-b7b7-a9a873359aa3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.268644 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4962e497-d702-41df-b7b7-a9a873359aa3-kube-api-access-6c5rg" (OuterVolumeSpecName: "kube-api-access-6c5rg") pod "4962e497-d702-41df-b7b7-a9a873359aa3" (UID: "4962e497-d702-41df-b7b7-a9a873359aa3"). InnerVolumeSpecName "kube-api-access-6c5rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.272330 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0507f2c0-f9b3-490f-b2ac-45be22f96c05" path="/var/lib/kubelet/pods/0507f2c0-f9b3-490f-b2ac-45be22f96c05/volumes" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.380368 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00eb590-84e6-4087-b370-226af97b869a-operator-scripts\") pod \"a00eb590-84e6-4087-b370-226af97b869a\" (UID: \"a00eb590-84e6-4087-b370-226af97b869a\") " Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.381129 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkhrp\" (UniqueName: \"kubernetes.io/projected/a00eb590-84e6-4087-b370-226af97b869a-kube-api-access-bkhrp\") pod \"a00eb590-84e6-4087-b370-226af97b869a\" (UID: \"a00eb590-84e6-4087-b370-226af97b869a\") " Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.381747 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4962e497-d702-41df-b7b7-a9a873359aa3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.381763 4756 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6c5rg\" (UniqueName: \"kubernetes.io/projected/4962e497-d702-41df-b7b7-a9a873359aa3-kube-api-access-6c5rg\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.384167 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00eb590-84e6-4087-b370-226af97b869a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a00eb590-84e6-4087-b370-226af97b869a" (UID: "a00eb590-84e6-4087-b370-226af97b869a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.391261 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00eb590-84e6-4087-b370-226af97b869a-kube-api-access-bkhrp" (OuterVolumeSpecName: "kube-api-access-bkhrp") pod "a00eb590-84e6-4087-b370-226af97b869a" (UID: "a00eb590-84e6-4087-b370-226af97b869a"). InnerVolumeSpecName "kube-api-access-bkhrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.455054 4756 generic.go:334] "Generic (PLEG): container finished" podID="875a8b1b-844e-435e-8656-4fbef59b74af" containerID="faa409fa4b5d5e8b5263ddee5a8a43765103db243621d2699afcce694d3d82a9" exitCode=0 Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.455188 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-149d-account-create-update-2gcn6" event={"ID":"875a8b1b-844e-435e-8656-4fbef59b74af","Type":"ContainerDied","Data":"faa409fa4b5d5e8b5263ddee5a8a43765103db243621d2699afcce694d3d82a9"} Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.483356 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerStarted","Data":"1f20e1ee8f4c366a8e43c803e1ea7865486452299b45b013ec06fce32bef4623"} Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.484743 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.491850 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00eb590-84e6-4087-b370-226af97b869a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.491889 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkhrp\" (UniqueName: \"kubernetes.io/projected/a00eb590-84e6-4087-b370-226af97b869a-kube-api-access-bkhrp\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.518053 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8p42c" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.519595 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8p42c" event={"ID":"4962e497-d702-41df-b7b7-a9a873359aa3","Type":"ContainerDied","Data":"e4809e7721150437f9f762e4911677545262d233c63a29f54c3681789e3e7b59"} Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.519639 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4809e7721150437f9f762e4911677545262d233c63a29f54c3681789e3e7b59" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.519926 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.520705 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.541672 4756 generic.go:334] "Generic (PLEG): container finished" podID="59a9719b-6d2e-4f77-8a55-0bfb9b29d29a" containerID="d750690109fc9e17b0952f0e84617875a1110328cff2cad951b5d8b66a8aa14f" exitCode=0 Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.541750 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" event={"ID":"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a","Type":"ContainerDied","Data":"d750690109fc9e17b0952f0e84617875a1110328cff2cad951b5d8b66a8aa14f"} Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.544117 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lgdzl" event={"ID":"a00eb590-84e6-4087-b370-226af97b869a","Type":"ContainerDied","Data":"213f1698b118644b46fbd038ef1d9c7beb6d3e307e94aa222b8381c836b614fd"} Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.544234 4756 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="213f1698b118644b46fbd038ef1d9c7beb6d3e307e94aa222b8381c836b614fd" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.544343 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lgdzl" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.544914 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7810090929999998 podStartE2EDuration="14.544884209s" podCreationTimestamp="2025-12-03 11:15:43 +0000 UTC" firstStartedPulling="2025-12-03 11:15:45.004230092 +0000 UTC m=+1356.034231336" lastFinishedPulling="2025-12-03 11:15:56.768105208 +0000 UTC m=+1367.798106452" observedRunningTime="2025-12-03 11:15:57.530386048 +0000 UTC m=+1368.560387292" watchObservedRunningTime="2025-12-03 11:15:57.544884209 +0000 UTC m=+1368.574885453" Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.554761 4756 generic.go:334] "Generic (PLEG): container finished" podID="a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c" containerID="35b01d66dafca6aa51157286bbe05779718ecdac89e337a88a31c6c4ced939ec" exitCode=0 Dec 03 11:15:57 crc kubenswrapper[4756]: I1203 11:15:57.554829 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" event={"ID":"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c","Type":"ContainerDied","Data":"35b01d66dafca6aa51157286bbe05779718ecdac89e337a88a31c6c4ced939ec"} Dec 03 11:15:58 crc kubenswrapper[4756]: I1203 11:15:58.006981 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 11:15:58 crc kubenswrapper[4756]: I1203 11:15:58.041660 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:58 crc kubenswrapper[4756]: I1203 11:15:58.060668 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" 
Dec 03 11:15:58 crc kubenswrapper[4756]: I1203 11:15:58.573026 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b66578e3-f591-49d8-b76a-1fc8f8f0d262","Type":"ContainerStarted","Data":"190dc2b01b007d0c6aec75bc3990067fdf94bc0556bea44f18d3cb1d13719dcd"} Dec 03 11:15:58 crc kubenswrapper[4756]: I1203 11:15:58.573665 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:58 crc kubenswrapper[4756]: I1203 11:15:58.573707 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.140563 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.303221 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpj5n\" (UniqueName: \"kubernetes.io/projected/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-kube-api-access-fpj5n\") pod \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.303407 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-operator-scripts\") pod \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\" (UID: \"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c\") " Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.306695 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c" (UID: "a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.334639 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-kube-api-access-fpj5n" (OuterVolumeSpecName: "kube-api-access-fpj5n") pod "a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c" (UID: "a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c"). InnerVolumeSpecName "kube-api-access-fpj5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.407422 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpj5n\" (UniqueName: \"kubernetes.io/projected/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-kube-api-access-fpj5n\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.407834 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.436205 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.447190 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.591513 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-149d-account-create-update-2gcn6" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.591538 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-149d-account-create-update-2gcn6" event={"ID":"875a8b1b-844e-435e-8656-4fbef59b74af","Type":"ContainerDied","Data":"aa359aeefefdd33834e910f76ddf9d629825cde0e36a6d28027abc032a229c78"} Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.591595 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa359aeefefdd33834e910f76ddf9d629825cde0e36a6d28027abc032a229c78" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.596726 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" event={"ID":"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a","Type":"ContainerDied","Data":"f66bfa0e1a94ea2b0f28e9c1bbd7241939d69e268c15270ee9633ccfb390ac08"} Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.596782 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66bfa0e1a94ea2b0f28e9c1bbd7241939d69e268c15270ee9633ccfb390ac08" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.596866 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-99a3-account-create-update-mlb88" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.601172 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.606212 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-aad7-account-create-update-9hdq8" event={"ID":"a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c","Type":"ContainerDied","Data":"101f59ef567a12e76a348ce3fb79f2aa4a0486ba28af5aabc6dc902d0959b490"} Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.606275 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="101f59ef567a12e76a348ce3fb79f2aa4a0486ba28af5aabc6dc902d0959b490" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.621280 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgcd8\" (UniqueName: \"kubernetes.io/projected/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-kube-api-access-kgcd8\") pod \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.621731 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a8b1b-844e-435e-8656-4fbef59b74af-operator-scripts\") pod \"875a8b1b-844e-435e-8656-4fbef59b74af\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.621884 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-operator-scripts\") pod \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\" (UID: \"59a9719b-6d2e-4f77-8a55-0bfb9b29d29a\") " Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.622113 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz45g\" (UniqueName: \"kubernetes.io/projected/875a8b1b-844e-435e-8656-4fbef59b74af-kube-api-access-wz45g\") pod 
\"875a8b1b-844e-435e-8656-4fbef59b74af\" (UID: \"875a8b1b-844e-435e-8656-4fbef59b74af\") " Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.622633 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59a9719b-6d2e-4f77-8a55-0bfb9b29d29a" (UID: "59a9719b-6d2e-4f77-8a55-0bfb9b29d29a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.622946 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875a8b1b-844e-435e-8656-4fbef59b74af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "875a8b1b-844e-435e-8656-4fbef59b74af" (UID: "875a8b1b-844e-435e-8656-4fbef59b74af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.624413 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a8b1b-844e-435e-8656-4fbef59b74af-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.624465 4756 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.630348 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-kube-api-access-kgcd8" (OuterVolumeSpecName: "kube-api-access-kgcd8") pod "59a9719b-6d2e-4f77-8a55-0bfb9b29d29a" (UID: "59a9719b-6d2e-4f77-8a55-0bfb9b29d29a"). InnerVolumeSpecName "kube-api-access-kgcd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.642238 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875a8b1b-844e-435e-8656-4fbef59b74af-kube-api-access-wz45g" (OuterVolumeSpecName: "kube-api-access-wz45g") pod "875a8b1b-844e-435e-8656-4fbef59b74af" (UID: "875a8b1b-844e-435e-8656-4fbef59b74af"). InnerVolumeSpecName "kube-api-access-wz45g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.727041 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgcd8\" (UniqueName: \"kubernetes.io/projected/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a-kube-api-access-kgcd8\") on node \"crc\" DevicePath \"\"" Dec 03 11:15:59 crc kubenswrapper[4756]: I1203 11:15:59.727806 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz45g\" (UniqueName: \"kubernetes.io/projected/875a8b1b-844e-435e-8656-4fbef59b74af-kube-api-access-wz45g\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:00 crc kubenswrapper[4756]: I1203 11:16:00.248684 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66bc647888-tcn4m" podUID="00c35a0d-70b4-453d-974a-85b638505280" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 03 11:16:00 crc kubenswrapper[4756]: I1203 11:16:00.623514 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b66578e3-f591-49d8-b76a-1fc8f8f0d262","Type":"ContainerStarted","Data":"f98b7d7c24763e63879b17ab958ed7dd44a6e9e4ee99d0192bccdcdaf32ced08"} Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.503748 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 11:16:01 crc 
kubenswrapper[4756]: I1203 11:16:01.504459 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.669797 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b66578e3-f591-49d8-b76a-1fc8f8f0d262","Type":"ContainerStarted","Data":"60fda0a16f5634f5d4bc3fc7068cdf5ee2a65c407553ab95e5b87776433a18c4"} Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.703581 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.703552691 podStartE2EDuration="5.703552691s" podCreationTimestamp="2025-12-03 11:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:16:01.697571484 +0000 UTC m=+1372.727572728" watchObservedRunningTime="2025-12-03 11:16:01.703552691 +0000 UTC m=+1372.733553935" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.881246 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nq8bd"] Dec 03 11:16:01 crc kubenswrapper[4756]: E1203 11:16:01.882165 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a9719b-6d2e-4f77-8a55-0bfb9b29d29a" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.882256 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a9719b-6d2e-4f77-8a55-0bfb9b29d29a" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: E1203 11:16:01.882320 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875a8b1b-844e-435e-8656-4fbef59b74af" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.882398 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="875a8b1b-844e-435e-8656-4fbef59b74af" 
containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: E1203 11:16:01.882475 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4962e497-d702-41df-b7b7-a9a873359aa3" containerName="mariadb-database-create" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.882532 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4962e497-d702-41df-b7b7-a9a873359aa3" containerName="mariadb-database-create" Dec 03 11:16:01 crc kubenswrapper[4756]: E1203 11:16:01.882596 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.882645 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: E1203 11:16:01.882714 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00eb590-84e6-4087-b370-226af97b869a" containerName="mariadb-database-create" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.882770 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00eb590-84e6-4087-b370-226af97b869a" containerName="mariadb-database-create" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.883041 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4962e497-d702-41df-b7b7-a9a873359aa3" containerName="mariadb-database-create" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.883143 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.887717 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="875a8b1b-844e-435e-8656-4fbef59b74af" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 
11:16:01.887835 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a9719b-6d2e-4f77-8a55-0bfb9b29d29a" containerName="mariadb-account-create-update" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.887921 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00eb590-84e6-4087-b370-226af97b869a" containerName="mariadb-database-create" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.888966 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.891309 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vzw9n" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.891511 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.893283 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.898455 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nq8bd"] Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.985862 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cchjf\" (UniqueName: \"kubernetes.io/projected/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-kube-api-access-cchjf\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.985943 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-config-data\") pod 
\"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.986022 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:01 crc kubenswrapper[4756]: I1203 11:16:01.986048 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-scripts\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.070691 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.088976 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cchjf\" (UniqueName: \"kubernetes.io/projected/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-kube-api-access-cchjf\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.089402 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-config-data\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: 
I1203 11:16:02.089637 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.089846 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-scripts\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.104979 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-scripts\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.124267 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.127923 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-config-data\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.130063 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cchjf\" (UniqueName: \"kubernetes.io/projected/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-kube-api-access-cchjf\") pod \"nova-cell0-conductor-db-sync-nq8bd\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:02 crc kubenswrapper[4756]: I1203 11:16:02.221088 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.101322 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nq8bd"] Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.115129 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.706125 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" event={"ID":"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33","Type":"ContainerStarted","Data":"670a2cb83949a30c843ec8f654a6ce5ef0826328b295ab243993b2c2dca09145"} Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.925326 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.925802 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-central-agent" containerID="cri-o://1ddca1103b180640bb50d8052ca04d379078a1ae93df9f905176eb70a9e85f3f" gracePeriod=30 Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.926603 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="proxy-httpd" 
containerID="cri-o://1f20e1ee8f4c366a8e43c803e1ea7865486452299b45b013ec06fce32bef4623" gracePeriod=30 Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.926700 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="sg-core" containerID="cri-o://df248aaab60b00321758957d470ceb1512c41bd5b557d83155ffc3797da84144" gracePeriod=30 Dec 03 11:16:03 crc kubenswrapper[4756]: I1203 11:16:03.926778 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-notification-agent" containerID="cri-o://01c30b4757126fe17d74ecba9e59be57d98720dd9970c5e68388b03d441d9cd0" gracePeriod=30 Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729588 4756 generic.go:334] "Generic (PLEG): container finished" podID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerID="1f20e1ee8f4c366a8e43c803e1ea7865486452299b45b013ec06fce32bef4623" exitCode=0 Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729660 4756 generic.go:334] "Generic (PLEG): container finished" podID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerID="df248aaab60b00321758957d470ceb1512c41bd5b557d83155ffc3797da84144" exitCode=2 Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729688 4756 generic.go:334] "Generic (PLEG): container finished" podID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerID="01c30b4757126fe17d74ecba9e59be57d98720dd9970c5e68388b03d441d9cd0" exitCode=0 Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729702 4756 generic.go:334] "Generic (PLEG): container finished" podID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerID="1ddca1103b180640bb50d8052ca04d379078a1ae93df9f905176eb70a9e85f3f" exitCode=0 Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729732 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerDied","Data":"1f20e1ee8f4c366a8e43c803e1ea7865486452299b45b013ec06fce32bef4623"} Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729785 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerDied","Data":"df248aaab60b00321758957d470ceb1512c41bd5b557d83155ffc3797da84144"} Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729796 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerDied","Data":"01c30b4757126fe17d74ecba9e59be57d98720dd9970c5e68388b03d441d9cd0"} Dec 03 11:16:04 crc kubenswrapper[4756]: I1203 11:16:04.729805 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerDied","Data":"1ddca1103b180640bb50d8052ca04d379078a1ae93df9f905176eb70a9e85f3f"} Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.117051 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.199610 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-sg-core-conf-yaml\") pod \"77240248-19ee-4d0b-9d01-781b90e90ebf\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.199690 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-run-httpd\") pod \"77240248-19ee-4d0b-9d01-781b90e90ebf\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.199770 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-scripts\") pod \"77240248-19ee-4d0b-9d01-781b90e90ebf\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.199866 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-log-httpd\") pod \"77240248-19ee-4d0b-9d01-781b90e90ebf\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.199931 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-config-data\") pod \"77240248-19ee-4d0b-9d01-781b90e90ebf\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.200008 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-combined-ca-bundle\") pod \"77240248-19ee-4d0b-9d01-781b90e90ebf\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.200038 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhrk8\" (UniqueName: \"kubernetes.io/projected/77240248-19ee-4d0b-9d01-781b90e90ebf-kube-api-access-nhrk8\") pod \"77240248-19ee-4d0b-9d01-781b90e90ebf\" (UID: \"77240248-19ee-4d0b-9d01-781b90e90ebf\") " Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.200530 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "77240248-19ee-4d0b-9d01-781b90e90ebf" (UID: "77240248-19ee-4d0b-9d01-781b90e90ebf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.200643 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "77240248-19ee-4d0b-9d01-781b90e90ebf" (UID: "77240248-19ee-4d0b-9d01-781b90e90ebf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.207987 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77240248-19ee-4d0b-9d01-781b90e90ebf-kube-api-access-nhrk8" (OuterVolumeSpecName: "kube-api-access-nhrk8") pod "77240248-19ee-4d0b-9d01-781b90e90ebf" (UID: "77240248-19ee-4d0b-9d01-781b90e90ebf"). InnerVolumeSpecName "kube-api-access-nhrk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.228240 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-scripts" (OuterVolumeSpecName: "scripts") pod "77240248-19ee-4d0b-9d01-781b90e90ebf" (UID: "77240248-19ee-4d0b-9d01-781b90e90ebf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.237316 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "77240248-19ee-4d0b-9d01-781b90e90ebf" (UID: "77240248-19ee-4d0b-9d01-781b90e90ebf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.298161 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77240248-19ee-4d0b-9d01-781b90e90ebf" (UID: "77240248-19ee-4d0b-9d01-781b90e90ebf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.303035 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.303072 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.303084 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhrk8\" (UniqueName: \"kubernetes.io/projected/77240248-19ee-4d0b-9d01-781b90e90ebf-kube-api-access-nhrk8\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.303097 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.303154 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77240248-19ee-4d0b-9d01-781b90e90ebf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.303166 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.354416 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-config-data" (OuterVolumeSpecName: "config-data") pod "77240248-19ee-4d0b-9d01-781b90e90ebf" (UID: "77240248-19ee-4d0b-9d01-781b90e90ebf"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.405834 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77240248-19ee-4d0b-9d01-781b90e90ebf-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.750141 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77240248-19ee-4d0b-9d01-781b90e90ebf","Type":"ContainerDied","Data":"ed52ac8f2cc02ec837b721e8c48c63df77c127e4cc02c1b8a3ca452774e39e65"} Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.750251 4756 scope.go:117] "RemoveContainer" containerID="1f20e1ee8f4c366a8e43c803e1ea7865486452299b45b013ec06fce32bef4623" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.750469 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.784095 4756 scope.go:117] "RemoveContainer" containerID="df248aaab60b00321758957d470ceb1512c41bd5b557d83155ffc3797da84144" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.815492 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.830162 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.845866 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:05 crc kubenswrapper[4756]: E1203 11:16:05.846564 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="proxy-httpd" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.846583 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" 
containerName="proxy-httpd" Dec 03 11:16:05 crc kubenswrapper[4756]: E1203 11:16:05.846608 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="sg-core" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.846634 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="sg-core" Dec 03 11:16:05 crc kubenswrapper[4756]: E1203 11:16:05.846648 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-notification-agent" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.846657 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-notification-agent" Dec 03 11:16:05 crc kubenswrapper[4756]: E1203 11:16:05.846686 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-central-agent" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.846692 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-central-agent" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.847008 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-notification-agent" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.847032 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="sg-core" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.847042 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="proxy-httpd" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.847048 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" containerName="ceilometer-central-agent" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.849046 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.858390 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.858683 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.866719 4756 scope.go:117] "RemoveContainer" containerID="01c30b4757126fe17d74ecba9e59be57d98720dd9970c5e68388b03d441d9cd0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.884998 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.918698 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.918783 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.918848 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwgk\" (UniqueName: \"kubernetes.io/projected/b8bc9458-5084-48fb-8788-09746855e466-kube-api-access-6pwgk\") pod \"ceilometer-0\" (UID: 
\"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.918938 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-scripts\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.918994 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-config-data\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.919098 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-run-httpd\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.919130 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-log-httpd\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:05 crc kubenswrapper[4756]: I1203 11:16:05.940470 4756 scope.go:117] "RemoveContainer" containerID="1ddca1103b180640bb50d8052ca04d379078a1ae93df9f905176eb70a9e85f3f" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.021545 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.021643 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pwgk\" (UniqueName: \"kubernetes.io/projected/b8bc9458-5084-48fb-8788-09746855e466-kube-api-access-6pwgk\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.021713 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-scripts\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.021743 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-config-data\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.021821 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-run-httpd\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.021852 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-log-httpd\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.021933 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.023078 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-log-httpd\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.023238 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-run-httpd\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.030288 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-config-data\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.030582 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-scripts\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.040900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.042762 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.043053 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pwgk\" (UniqueName: \"kubernetes.io/projected/b8bc9458-5084-48fb-8788-09746855e466-kube-api-access-6pwgk\") pod \"ceilometer-0\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.193515 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:06 crc kubenswrapper[4756]: W1203 11:16:06.732843 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8bc9458_5084_48fb_8788_09746855e466.slice/crio-bfbd026a938a791d320401d916934be715021279b81dff74c482f9ab1ed30df4 WatchSource:0}: Error finding container bfbd026a938a791d320401d916934be715021279b81dff74c482f9ab1ed30df4: Status 404 returned error can't find the container with id bfbd026a938a791d320401d916934be715021279b81dff74c482f9ab1ed30df4 Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.737930 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.857728 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerStarted","Data":"bfbd026a938a791d320401d916934be715021279b81dff74c482f9ab1ed30df4"} Dec 03 11:16:06 crc kubenswrapper[4756]: I1203 11:16:06.974845 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 11:16:06 crc 
kubenswrapper[4756]: I1203 11:16:06.974917 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 11:16:07 crc kubenswrapper[4756]: I1203 11:16:07.020120 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 11:16:07 crc kubenswrapper[4756]: I1203 11:16:07.024219 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 11:16:07 crc kubenswrapper[4756]: I1203 11:16:07.251819 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77240248-19ee-4d0b-9d01-781b90e90ebf" path="/var/lib/kubelet/pods/77240248-19ee-4d0b-9d01-781b90e90ebf/volumes" Dec 03 11:16:07 crc kubenswrapper[4756]: I1203 11:16:07.878851 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 11:16:07 crc kubenswrapper[4756]: I1203 11:16:07.878924 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 11:16:09 crc kubenswrapper[4756]: I1203 11:16:09.903399 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 11:16:09 crc kubenswrapper[4756]: I1203 11:16:09.904915 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 11:16:10 crc kubenswrapper[4756]: I1203 11:16:10.249772 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66bc647888-tcn4m" podUID="00c35a0d-70b4-453d-974a-85b638505280" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 03 11:16:10 crc kubenswrapper[4756]: I1203 11:16:10.381029 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 
03 11:16:10 crc kubenswrapper[4756]: I1203 11:16:10.381131 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 11:16:11 crc kubenswrapper[4756]: I1203 11:16:11.290828 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:14 crc kubenswrapper[4756]: I1203 11:16:14.996739 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" event={"ID":"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33","Type":"ContainerStarted","Data":"a6a2090366107a5298e564b6bef95bb3c71dba499c23e2820593240dc8ef407c"} Dec 03 11:16:15 crc kubenswrapper[4756]: I1203 11:16:15.005970 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerStarted","Data":"441bf9906676742ebc0ca712b45d7f4ae809f43a25910f5a2e5988c54216e072"} Dec 03 11:16:16 crc kubenswrapper[4756]: I1203 11:16:16.019262 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerStarted","Data":"1847c02b97a6340a3061e8134125c0d687f76d7362cccc0cbc2391c15ce6b732"} Dec 03 11:16:17 crc kubenswrapper[4756]: I1203 11:16:17.036906 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerStarted","Data":"7afce633458212404031426d18b6bc68d8e9e2b1968d5ae24e7c32b100ba70f5"} Dec 03 11:16:19 crc kubenswrapper[4756]: I1203 11:16:19.062780 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerStarted","Data":"c823edc259528035d5a2f7bb237d63c2a0ad079c0ddf2fa2761c4b11bda8b205"} Dec 03 11:16:19 crc kubenswrapper[4756]: I1203 11:16:19.063224 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="sg-core" containerID="cri-o://7afce633458212404031426d18b6bc68d8e9e2b1968d5ae24e7c32b100ba70f5" gracePeriod=30 Dec 03 11:16:19 crc kubenswrapper[4756]: I1203 11:16:19.063325 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="proxy-httpd" containerID="cri-o://c823edc259528035d5a2f7bb237d63c2a0ad079c0ddf2fa2761c4b11bda8b205" gracePeriod=30 Dec 03 11:16:19 crc kubenswrapper[4756]: I1203 11:16:19.063662 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-notification-agent" containerID="cri-o://1847c02b97a6340a3061e8134125c0d687f76d7362cccc0cbc2391c15ce6b732" gracePeriod=30 Dec 03 11:16:19 crc kubenswrapper[4756]: I1203 11:16:19.063811 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-central-agent" containerID="cri-o://441bf9906676742ebc0ca712b45d7f4ae809f43a25910f5a2e5988c54216e072" gracePeriod=30 Dec 03 11:16:19 crc kubenswrapper[4756]: I1203 11:16:19.094532 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.480264213 podStartE2EDuration="14.094473017s" podCreationTimestamp="2025-12-03 11:16:05 +0000 UTC" firstStartedPulling="2025-12-03 11:16:06.736435587 +0000 UTC m=+1377.766436831" lastFinishedPulling="2025-12-03 11:16:18.350644391 +0000 UTC m=+1389.380645635" observedRunningTime="2025-12-03 11:16:19.088767419 +0000 UTC m=+1390.118768663" watchObservedRunningTime="2025-12-03 11:16:19.094473017 +0000 UTC m=+1390.124474261" Dec 03 11:16:19 crc kubenswrapper[4756]: I1203 11:16:19.101849 4756 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" podStartSLOduration=7.537273228 podStartE2EDuration="18.101824295s" podCreationTimestamp="2025-12-03 11:16:01 +0000 UTC" firstStartedPulling="2025-12-03 11:16:03.114850511 +0000 UTC m=+1374.144851755" lastFinishedPulling="2025-12-03 11:16:13.679401578 +0000 UTC m=+1384.709402822" observedRunningTime="2025-12-03 11:16:15.024525108 +0000 UTC m=+1386.054526352" watchObservedRunningTime="2025-12-03 11:16:19.101824295 +0000 UTC m=+1390.131825539" Dec 03 11:16:20 crc kubenswrapper[4756]: I1203 11:16:20.075906 4756 generic.go:334] "Generic (PLEG): container finished" podID="b8bc9458-5084-48fb-8788-09746855e466" containerID="c823edc259528035d5a2f7bb237d63c2a0ad079c0ddf2fa2761c4b11bda8b205" exitCode=0 Dec 03 11:16:20 crc kubenswrapper[4756]: I1203 11:16:20.076222 4756 generic.go:334] "Generic (PLEG): container finished" podID="b8bc9458-5084-48fb-8788-09746855e466" containerID="7afce633458212404031426d18b6bc68d8e9e2b1968d5ae24e7c32b100ba70f5" exitCode=2 Dec 03 11:16:20 crc kubenswrapper[4756]: I1203 11:16:20.076232 4756 generic.go:334] "Generic (PLEG): container finished" podID="b8bc9458-5084-48fb-8788-09746855e466" containerID="1847c02b97a6340a3061e8134125c0d687f76d7362cccc0cbc2391c15ce6b732" exitCode=0 Dec 03 11:16:20 crc kubenswrapper[4756]: I1203 11:16:20.076106 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerDied","Data":"c823edc259528035d5a2f7bb237d63c2a0ad079c0ddf2fa2761c4b11bda8b205"} Dec 03 11:16:20 crc kubenswrapper[4756]: I1203 11:16:20.076274 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerDied","Data":"7afce633458212404031426d18b6bc68d8e9e2b1968d5ae24e7c32b100ba70f5"} Dec 03 11:16:20 crc kubenswrapper[4756]: I1203 11:16:20.076291 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerDied","Data":"1847c02b97a6340a3061e8134125c0d687f76d7362cccc0cbc2391c15ce6b732"} Dec 03 11:16:22 crc kubenswrapper[4756]: I1203 11:16:22.532417 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:16:22 crc kubenswrapper[4756]: I1203 11:16:22.607248 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:16:22 crc kubenswrapper[4756]: I1203 11:16:22.607343 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:16:22 crc kubenswrapper[4756]: I1203 11:16:22.607469 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:16:22 crc kubenswrapper[4756]: I1203 11:16:22.608429 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0355b17b2ab1c82e928552e43898507356fa2e99840efcf360f3485a28faa26"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:16:22 crc kubenswrapper[4756]: I1203 11:16:22.608491 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" 
containerName="machine-config-daemon" containerID="cri-o://e0355b17b2ab1c82e928552e43898507356fa2e99840efcf360f3485a28faa26" gracePeriod=600 Dec 03 11:16:23 crc kubenswrapper[4756]: I1203 11:16:23.111452 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="e0355b17b2ab1c82e928552e43898507356fa2e99840efcf360f3485a28faa26" exitCode=0 Dec 03 11:16:23 crc kubenswrapper[4756]: I1203 11:16:23.111680 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"e0355b17b2ab1c82e928552e43898507356fa2e99840efcf360f3485a28faa26"} Dec 03 11:16:23 crc kubenswrapper[4756]: I1203 11:16:23.112587 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367"} Dec 03 11:16:23 crc kubenswrapper[4756]: I1203 11:16:23.112654 4756 scope.go:117] "RemoveContainer" containerID="664265e67f7670380e0f58f5627ec2e4920ce372619dd56f4c84d4b06cd1734c" Dec 03 11:16:24 crc kubenswrapper[4756]: I1203 11:16:24.585739 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66bc647888-tcn4m" Dec 03 11:16:24 crc kubenswrapper[4756]: I1203 11:16:24.676252 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f6f9857bb-rncmk"] Dec 03 11:16:24 crc kubenswrapper[4756]: I1203 11:16:24.676516 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f6f9857bb-rncmk" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon-log" containerID="cri-o://c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5" gracePeriod=30 Dec 03 11:16:24 crc kubenswrapper[4756]: I1203 
11:16:24.676897 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f6f9857bb-rncmk" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" containerID="cri-o://7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808" gracePeriod=30 Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.202811 4756 generic.go:334] "Generic (PLEG): container finished" podID="b8bc9458-5084-48fb-8788-09746855e466" containerID="441bf9906676742ebc0ca712b45d7f4ae809f43a25910f5a2e5988c54216e072" exitCode=0 Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.203517 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerDied","Data":"441bf9906676742ebc0ca712b45d7f4ae809f43a25910f5a2e5988c54216e072"} Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.337440 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.434143 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-config-data\") pod \"b8bc9458-5084-48fb-8788-09746855e466\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.434757 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-run-httpd\") pod \"b8bc9458-5084-48fb-8788-09746855e466\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.434823 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pwgk\" (UniqueName: \"kubernetes.io/projected/b8bc9458-5084-48fb-8788-09746855e466-kube-api-access-6pwgk\") pod 
\"b8bc9458-5084-48fb-8788-09746855e466\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.434972 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-scripts\") pod \"b8bc9458-5084-48fb-8788-09746855e466\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.435006 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-log-httpd\") pod \"b8bc9458-5084-48fb-8788-09746855e466\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.435206 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-combined-ca-bundle\") pod \"b8bc9458-5084-48fb-8788-09746855e466\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.435245 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-sg-core-conf-yaml\") pod \"b8bc9458-5084-48fb-8788-09746855e466\" (UID: \"b8bc9458-5084-48fb-8788-09746855e466\") " Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.435314 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b8bc9458-5084-48fb-8788-09746855e466" (UID: "b8bc9458-5084-48fb-8788-09746855e466"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.435914 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b8bc9458-5084-48fb-8788-09746855e466" (UID: "b8bc9458-5084-48fb-8788-09746855e466"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.435998 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.442440 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8bc9458-5084-48fb-8788-09746855e466-kube-api-access-6pwgk" (OuterVolumeSpecName: "kube-api-access-6pwgk") pod "b8bc9458-5084-48fb-8788-09746855e466" (UID: "b8bc9458-5084-48fb-8788-09746855e466"). InnerVolumeSpecName "kube-api-access-6pwgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.463442 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-scripts" (OuterVolumeSpecName: "scripts") pod "b8bc9458-5084-48fb-8788-09746855e466" (UID: "b8bc9458-5084-48fb-8788-09746855e466"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.469802 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b8bc9458-5084-48fb-8788-09746855e466" (UID: "b8bc9458-5084-48fb-8788-09746855e466"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.527520 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8bc9458-5084-48fb-8788-09746855e466" (UID: "b8bc9458-5084-48fb-8788-09746855e466"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.538226 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.538278 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.538288 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pwgk\" (UniqueName: \"kubernetes.io/projected/b8bc9458-5084-48fb-8788-09746855e466-kube-api-access-6pwgk\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.538304 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.538314 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8bc9458-5084-48fb-8788-09746855e466-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.569884 4756 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-config-data" (OuterVolumeSpecName: "config-data") pod "b8bc9458-5084-48fb-8788-09746855e466" (UID: "b8bc9458-5084-48fb-8788-09746855e466"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:25 crc kubenswrapper[4756]: I1203 11:16:25.640382 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8bc9458-5084-48fb-8788-09746855e466-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.218126 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8bc9458-5084-48fb-8788-09746855e466","Type":"ContainerDied","Data":"bfbd026a938a791d320401d916934be715021279b81dff74c482f9ab1ed30df4"} Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.218196 4756 scope.go:117] "RemoveContainer" containerID="c823edc259528035d5a2f7bb237d63c2a0ad079c0ddf2fa2761c4b11bda8b205" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.218237 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.250528 4756 scope.go:117] "RemoveContainer" containerID="7afce633458212404031426d18b6bc68d8e9e2b1968d5ae24e7c32b100ba70f5" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.275024 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.292695 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.293060 4756 scope.go:117] "RemoveContainer" containerID="1847c02b97a6340a3061e8134125c0d687f76d7362cccc0cbc2391c15ce6b732" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.302914 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:26 crc kubenswrapper[4756]: E1203 11:16:26.303909 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="proxy-httpd" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304089 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="proxy-httpd" Dec 03 11:16:26 crc kubenswrapper[4756]: E1203 11:16:26.304168 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-notification-agent" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304219 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-notification-agent" Dec 03 11:16:26 crc kubenswrapper[4756]: E1203 11:16:26.304283 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-central-agent" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304332 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-central-agent" Dec 03 11:16:26 crc kubenswrapper[4756]: E1203 11:16:26.304404 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="sg-core" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304461 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="sg-core" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304753 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-notification-agent" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304830 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="sg-core" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304901 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="ceilometer-central-agent" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.304975 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8bc9458-5084-48fb-8788-09746855e466" containerName="proxy-httpd" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.307093 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.317104 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.317461 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.329673 4756 scope.go:117] "RemoveContainer" containerID="441bf9906676742ebc0ca712b45d7f4ae809f43a25910f5a2e5988c54216e072" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.330861 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.356530 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfq66\" (UniqueName: \"kubernetes.io/projected/f0e483b8-8712-403c-bac4-f5893d35fc6f-kube-api-access-pfq66\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.356622 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-scripts\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.356655 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-run-httpd\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.356706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.356749 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.356775 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-config-data\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.356809 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-log-httpd\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459107 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfq66\" (UniqueName: \"kubernetes.io/projected/f0e483b8-8712-403c-bac4-f5893d35fc6f-kube-api-access-pfq66\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459203 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-scripts\") pod \"ceilometer-0\" (UID: 
\"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459226 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-run-httpd\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459269 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459305 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459346 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-config-data\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459377 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-log-httpd\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.459942 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-log-httpd\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.461881 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-run-httpd\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.468410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.468828 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.471241 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-scripts\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.474412 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-config-data\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.481840 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pfq66\" (UniqueName: \"kubernetes.io/projected/f0e483b8-8712-403c-bac4-f5893d35fc6f-kube-api-access-pfq66\") pod \"ceilometer-0\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " pod="openstack/ceilometer-0" Dec 03 11:16:26 crc kubenswrapper[4756]: I1203 11:16:26.636408 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:16:27 crc kubenswrapper[4756]: I1203 11:16:27.166478 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:16:27 crc kubenswrapper[4756]: W1203 11:16:27.168005 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0e483b8_8712_403c_bac4_f5893d35fc6f.slice/crio-d2c8ae5286a06413922f60da9663b679f72fdfb39a0a5d7d041784df5874c1d0 WatchSource:0}: Error finding container d2c8ae5286a06413922f60da9663b679f72fdfb39a0a5d7d041784df5874c1d0: Status 404 returned error can't find the container with id d2c8ae5286a06413922f60da9663b679f72fdfb39a0a5d7d041784df5874c1d0 Dec 03 11:16:27 crc kubenswrapper[4756]: I1203 11:16:27.232564 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerStarted","Data":"d2c8ae5286a06413922f60da9663b679f72fdfb39a0a5d7d041784df5874c1d0"} Dec 03 11:16:27 crc kubenswrapper[4756]: I1203 11:16:27.248771 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8bc9458-5084-48fb-8788-09746855e466" path="/var/lib/kubelet/pods/b8bc9458-5084-48fb-8788-09746855e466/volumes" Dec 03 11:16:28 crc kubenswrapper[4756]: I1203 11:16:28.247747 4756 generic.go:334] "Generic (PLEG): container finished" podID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerID="7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808" exitCode=0 Dec 03 11:16:28 crc kubenswrapper[4756]: I1203 11:16:28.248739 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6f9857bb-rncmk" event={"ID":"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3","Type":"ContainerDied","Data":"7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808"} Dec 03 11:16:28 crc kubenswrapper[4756]: I1203 11:16:28.251182 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerStarted","Data":"0b4da00108b3a7d9599756253f75f90d7e77bba4e627b71d3b042aa00cac3ced"} Dec 03 11:16:29 crc kubenswrapper[4756]: I1203 11:16:29.317291 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerStarted","Data":"18860787d480be681c30370fd83acbed4e19b291dc8ed776036be4e8e85eb1ad"} Dec 03 11:16:29 crc kubenswrapper[4756]: I1203 11:16:29.321919 4756 generic.go:334] "Generic (PLEG): container finished" podID="2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" containerID="a6a2090366107a5298e564b6bef95bb3c71dba499c23e2820593240dc8ef407c" exitCode=0 Dec 03 11:16:29 crc kubenswrapper[4756]: I1203 11:16:29.321993 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" event={"ID":"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33","Type":"ContainerDied","Data":"a6a2090366107a5298e564b6bef95bb3c71dba499c23e2820593240dc8ef407c"} Dec 03 11:16:29 crc kubenswrapper[4756]: I1203 11:16:29.925039 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f6f9857bb-rncmk" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.337762 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerStarted","Data":"4b83b11e9c464d963e1ad22c6a87f24141f861a10139d437bcf71b5494486356"} Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.744139 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.854660 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-scripts\") pod \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.854788 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-combined-ca-bundle\") pod \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.855099 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-config-data\") pod \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.855212 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cchjf\" (UniqueName: \"kubernetes.io/projected/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-kube-api-access-cchjf\") pod \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\" (UID: \"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33\") " Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.868331 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-scripts" (OuterVolumeSpecName: "scripts") 
pod "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" (UID: "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.871272 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-kube-api-access-cchjf" (OuterVolumeSpecName: "kube-api-access-cchjf") pod "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" (UID: "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33"). InnerVolumeSpecName "kube-api-access-cchjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.895018 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-config-data" (OuterVolumeSpecName: "config-data") pod "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" (UID: "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.898676 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" (UID: "2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.958666 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.959197 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.959220 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:30 crc kubenswrapper[4756]: I1203 11:16:30.959235 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cchjf\" (UniqueName: \"kubernetes.io/projected/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33-kube-api-access-cchjf\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.355444 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerStarted","Data":"69d03621bf33942744d907ee6cbf84126f088eb600eded8e8a97b70fa8a556f0"} Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.357093 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.384221 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" event={"ID":"2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33","Type":"ContainerDied","Data":"670a2cb83949a30c843ec8f654a6ce5ef0826328b295ab243993b2c2dca09145"} Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.384298 4756 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="670a2cb83949a30c843ec8f654a6ce5ef0826328b295ab243993b2c2dca09145" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.384413 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nq8bd" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.554488 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.745060977 podStartE2EDuration="5.552822838s" podCreationTimestamp="2025-12-03 11:16:26 +0000 UTC" firstStartedPulling="2025-12-03 11:16:27.171082732 +0000 UTC m=+1398.201083976" lastFinishedPulling="2025-12-03 11:16:30.978844593 +0000 UTC m=+1402.008845837" observedRunningTime="2025-12-03 11:16:31.400664301 +0000 UTC m=+1402.430665545" watchObservedRunningTime="2025-12-03 11:16:31.552822838 +0000 UTC m=+1402.582824082" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.631460 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 11:16:31 crc kubenswrapper[4756]: E1203 11:16:31.632400 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" containerName="nova-cell0-conductor-db-sync" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.632434 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" containerName="nova-cell0-conductor-db-sync" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.632781 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" containerName="nova-cell0-conductor-db-sync" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.633885 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.639366 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vzw9n" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.643507 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.668158 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.748241 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c862336-6a53-419e-aaf8-f64358150259-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.748395 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c862336-6a53-419e-aaf8-f64358150259-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.749074 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c56s\" (UniqueName: \"kubernetes.io/projected/9c862336-6a53-419e-aaf8-f64358150259-kube-api-access-5c56s\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.852119 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9c862336-6a53-419e-aaf8-f64358150259-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.852347 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c862336-6a53-419e-aaf8-f64358150259-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.853040 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c56s\" (UniqueName: \"kubernetes.io/projected/9c862336-6a53-419e-aaf8-f64358150259-kube-api-access-5c56s\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.857984 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c862336-6a53-419e-aaf8-f64358150259-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.860670 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c862336-6a53-419e-aaf8-f64358150259-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:31 crc kubenswrapper[4756]: I1203 11:16:31.873046 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c56s\" (UniqueName: \"kubernetes.io/projected/9c862336-6a53-419e-aaf8-f64358150259-kube-api-access-5c56s\") pod \"nova-cell0-conductor-0\" 
(UID: \"9c862336-6a53-419e-aaf8-f64358150259\") " pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:32 crc kubenswrapper[4756]: I1203 11:16:32.016892 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:32 crc kubenswrapper[4756]: I1203 11:16:32.583516 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 11:16:32 crc kubenswrapper[4756]: W1203 11:16:32.595907 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c862336_6a53_419e_aaf8_f64358150259.slice/crio-9f784429c328945880078f3f134d843f95ce0b38884562cf81f0ce6445c28b3c WatchSource:0}: Error finding container 9f784429c328945880078f3f134d843f95ce0b38884562cf81f0ce6445c28b3c: Status 404 returned error can't find the container with id 9f784429c328945880078f3f134d843f95ce0b38884562cf81f0ce6445c28b3c Dec 03 11:16:33 crc kubenswrapper[4756]: I1203 11:16:33.409412 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9c862336-6a53-419e-aaf8-f64358150259","Type":"ContainerStarted","Data":"7b161e3a424c61a22d9619d5ce851f78246ad1affda41343341c2f7285387f74"} Dec 03 11:16:33 crc kubenswrapper[4756]: I1203 11:16:33.410023 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9c862336-6a53-419e-aaf8-f64358150259","Type":"ContainerStarted","Data":"9f784429c328945880078f3f134d843f95ce0b38884562cf81f0ce6445c28b3c"} Dec 03 11:16:33 crc kubenswrapper[4756]: I1203 11:16:33.410153 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:33 crc kubenswrapper[4756]: I1203 11:16:33.429061 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.429040189 podStartE2EDuration="2.429040189s" 
podCreationTimestamp="2025-12-03 11:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:16:33.426221391 +0000 UTC m=+1404.456222665" watchObservedRunningTime="2025-12-03 11:16:33.429040189 +0000 UTC m=+1404.459041433" Dec 03 11:16:39 crc kubenswrapper[4756]: I1203 11:16:39.925132 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f6f9857bb-rncmk" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.051914 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.543882 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6pnv6"] Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.548145 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.552260 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.552272 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.564849 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6pnv6"] Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.673065 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-config-data\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.673713 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-scripts\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.692389 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.695935 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkhv\" (UniqueName: 
\"kubernetes.io/projected/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-kube-api-access-zqkhv\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.778159 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.781034 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.785972 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.789909 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.806436 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-logs\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.806492 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-config-data\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.806521 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-config-data\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc 
kubenswrapper[4756]: I1203 11:16:42.806541 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-scripts\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.806613 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.806658 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqpn\" (UniqueName: \"kubernetes.io/projected/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-kube-api-access-6vqpn\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.806710 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkhv\" (UniqueName: \"kubernetes.io/projected/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-kube-api-access-zqkhv\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.806752 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 
11:16:42.828156 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-scripts\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.836069 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.843714 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-config-data\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.857980 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkhv\" (UniqueName: \"kubernetes.io/projected/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-kube-api-access-zqkhv\") pod \"nova-cell0-cell-mapping-6pnv6\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.908747 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-config-data\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.909139 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6vqpn\" (UniqueName: \"kubernetes.io/projected/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-kube-api-access-6vqpn\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.909332 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.909390 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-logs\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.909863 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-logs\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.913823 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-config-data\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.915361 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 
11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.915375 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.946287 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.948742 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.956466 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.967735 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vqpn\" (UniqueName: \"kubernetes.io/projected/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-kube-api-access-6vqpn\") pod \"nova-metadata-0\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " pod="openstack/nova-metadata-0" Dec 03 11:16:42 crc kubenswrapper[4756]: I1203 11:16:42.991782 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.025497 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f59def-c3cb-4d35-b553-9566c9bc0b53-logs\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.025594 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-config-data\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.025670 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbbp\" (UniqueName: \"kubernetes.io/projected/14f59def-c3cb-4d35-b553-9566c9bc0b53-kube-api-access-fhbbp\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.025758 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.081628 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.083810 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.115289 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.116380 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.131342 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f59def-c3cb-4d35-b553-9566c9bc0b53-logs\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.131410 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-config-data\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.131574 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vps\" (UniqueName: \"kubernetes.io/projected/7f3dac4c-efe2-405f-ade3-b2464752ebd4-kube-api-access-f2vps\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.131666 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhbbp\" (UniqueName: \"kubernetes.io/projected/14f59def-c3cb-4d35-b553-9566c9bc0b53-kube-api-access-fhbbp\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.131737 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.131796 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.131980 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.132336 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-f5wpg"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.136132 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f59def-c3cb-4d35-b553-9566c9bc0b53-logs\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.170603 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhbbp\" (UniqueName: \"kubernetes.io/projected/14f59def-c3cb-4d35-b553-9566c9bc0b53-kube-api-access-fhbbp\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.172213 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-config-data\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.173034 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.236569 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2vps\" (UniqueName: \"kubernetes.io/projected/7f3dac4c-efe2-405f-ade3-b2464752ebd4-kube-api-access-f2vps\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.236645 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.236670 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.244153 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.250599 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.262759 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2vps\" (UniqueName: \"kubernetes.io/projected/7f3dac4c-efe2-405f-ade3-b2464752ebd4-kube-api-access-f2vps\") pod \"nova-cell1-novncproxy-0\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.471359 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.487466 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.493672 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.544668 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.544946 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-svc\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.545116 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.545212 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-config\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.545429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.545491 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmrj\" (UniqueName: \"kubernetes.io/projected/805494b2-fcad-4208-895c-4b2708a9d129-kube-api-access-gdmrj\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.568893 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.568937 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-f5wpg"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.569157 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 
11:16:43.572267 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.572433 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.579324 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.650494 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-config\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.650619 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.650743 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.650787 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdmrj\" (UniqueName: \"kubernetes.io/projected/805494b2-fcad-4208-895c-4b2708a9d129-kube-api-access-gdmrj\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " 
pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.650836 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.650930 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-config-data\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.650999 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-svc\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.651070 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.651109 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvgz\" (UniqueName: \"kubernetes.io/projected/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-kube-api-access-dkvgz\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 
crc kubenswrapper[4756]: I1203 11:16:43.652600 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-config\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.655158 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.655373 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-svc\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.655564 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.658314 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.687920 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-gdmrj\" (UniqueName: \"kubernetes.io/projected/805494b2-fcad-4208-895c-4b2708a9d129-kube-api-access-gdmrj\") pod \"dnsmasq-dns-757b4f8459-f5wpg\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.753429 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-config-data\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.753799 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvgz\" (UniqueName: \"kubernetes.io/projected/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-kube-api-access-dkvgz\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.753860 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.759307 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-config-data\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.777426 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-combined-ca-bundle\") pod \"nova-scheduler-0\" 
(UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.783686 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvgz\" (UniqueName: \"kubernetes.io/projected/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-kube-api-access-dkvgz\") pod \"nova-scheduler-0\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.876670 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6pnv6"] Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.896633 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:43 crc kubenswrapper[4756]: I1203 11:16:43.916673 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.158797 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:44 crc kubenswrapper[4756]: W1203 11:16:44.181791 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2567590b_08a6_4a2b_b2b2_0d2fd2e4a2dd.slice/crio-aff788b53dbe317b7c0f15bea8849de5752ff34331885d5795ef8ca869df6f47 WatchSource:0}: Error finding container aff788b53dbe317b7c0f15bea8849de5752ff34331885d5795ef8ca869df6f47: Status 404 returned error can't find the container with id aff788b53dbe317b7c0f15bea8849de5752ff34331885d5795ef8ca869df6f47 Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.237832 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7t27"] Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.241182 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.247265 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.247620 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.258637 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7t27"] Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.287967 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.367657 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.376918 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-scripts\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.377023 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-config-data\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.377129 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.377231 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xx5n\" (UniqueName: \"kubernetes.io/projected/0efda8e8-882b-44b9-9bcd-479286328ec1-kube-api-access-6xx5n\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.481716 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.482139 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xx5n\" (UniqueName: \"kubernetes.io/projected/0efda8e8-882b-44b9-9bcd-479286328ec1-kube-api-access-6xx5n\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.482921 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-scripts\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.483065 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-config-data\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.493448 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-config-data\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.493873 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.497643 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-scripts\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.517377 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xx5n\" (UniqueName: \"kubernetes.io/projected/0efda8e8-882b-44b9-9bcd-479286328ec1-kube-api-access-6xx5n\") pod \"nova-cell1-conductor-db-sync-x7t27\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") " pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.532731 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.588746 
4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7t27" Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.605895 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-f5wpg"] Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.682457 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14f59def-c3cb-4d35-b553-9566c9bc0b53","Type":"ContainerStarted","Data":"6f09bffcd0f66cbf72843e193af3d8e807023cc6cc140c7379f2b71c344cc263"} Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.684272 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" event={"ID":"805494b2-fcad-4208-895c-4b2708a9d129","Type":"ContainerStarted","Data":"b7c991f5e93a06dd74eb990fa384da2991446fe2a5124c44f9d85323202772f1"} Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.694076 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6pnv6" event={"ID":"de2c45dd-c649-4e7f-bdcb-259bbc663d8c","Type":"ContainerStarted","Data":"5acb5aea66d13bf3cba991c192e7cf1d5c437bd45fed9f1127abef1914bade44"} Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.694135 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6pnv6" event={"ID":"de2c45dd-c649-4e7f-bdcb-259bbc663d8c","Type":"ContainerStarted","Data":"4180a63ae6110c59838a145ac402b0e6bdc70c7aeb9ddc69242a4727d7f1be29"} Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.697328 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ca2fd3-246d-46d0-bfd3-17aca288ab94","Type":"ContainerStarted","Data":"ebc4ed819708c2a7f63cc1d9bb4e5defcc9221e3dd4c0d52fcd8c976287e802d"} Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.702913 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd","Type":"ContainerStarted","Data":"aff788b53dbe317b7c0f15bea8849de5752ff34331885d5795ef8ca869df6f47"} Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.706405 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7f3dac4c-efe2-405f-ade3-b2464752ebd4","Type":"ContainerStarted","Data":"837871ff9af0cb32303870680077d39bcb1306b4571cbc18c9b8db801b76bab0"} Dec 03 11:16:44 crc kubenswrapper[4756]: I1203 11:16:44.721324 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6pnv6" podStartSLOduration=2.721296728 podStartE2EDuration="2.721296728s" podCreationTimestamp="2025-12-03 11:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:16:44.718857342 +0000 UTC m=+1415.748858586" watchObservedRunningTime="2025-12-03 11:16:44.721296728 +0000 UTC m=+1415.751297972" Dec 03 11:16:45 crc kubenswrapper[4756]: I1203 11:16:45.182031 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7t27"] Dec 03 11:16:45 crc kubenswrapper[4756]: W1203 11:16:45.298173 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0efda8e8_882b_44b9_9bcd_479286328ec1.slice/crio-6e3b02cc192dd70b3ae05668e8280c698a05777393220b2175b234600b5ab403 WatchSource:0}: Error finding container 6e3b02cc192dd70b3ae05668e8280c698a05777393220b2175b234600b5ab403: Status 404 returned error can't find the container with id 6e3b02cc192dd70b3ae05668e8280c698a05777393220b2175b234600b5ab403 Dec 03 11:16:45 crc kubenswrapper[4756]: I1203 11:16:45.718775 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7t27" 
event={"ID":"0efda8e8-882b-44b9-9bcd-479286328ec1","Type":"ContainerStarted","Data":"6e3b02cc192dd70b3ae05668e8280c698a05777393220b2175b234600b5ab403"} Dec 03 11:16:45 crc kubenswrapper[4756]: I1203 11:16:45.723416 4756 generic.go:334] "Generic (PLEG): container finished" podID="805494b2-fcad-4208-895c-4b2708a9d129" containerID="88533c338ed62592616b04a9aa966f0e8401c3f5cf7a1f1383f692d2b6709d60" exitCode=0 Dec 03 11:16:45 crc kubenswrapper[4756]: I1203 11:16:45.723543 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" event={"ID":"805494b2-fcad-4208-895c-4b2708a9d129","Type":"ContainerDied","Data":"88533c338ed62592616b04a9aa966f0e8401c3f5cf7a1f1383f692d2b6709d60"} Dec 03 11:16:46 crc kubenswrapper[4756]: I1203 11:16:46.645192 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:16:46 crc kubenswrapper[4756]: I1203 11:16:46.661486 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:46 crc kubenswrapper[4756]: I1203 11:16:46.738980 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" event={"ID":"805494b2-fcad-4208-895c-4b2708a9d129","Type":"ContainerStarted","Data":"dcd3bff036886b61e12db8c718f70ff40285a681f76eccc016126fc7346349ca"} Dec 03 11:16:46 crc kubenswrapper[4756]: I1203 11:16:46.739274 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:46 crc kubenswrapper[4756]: I1203 11:16:46.741390 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7t27" event={"ID":"0efda8e8-882b-44b9-9bcd-479286328ec1","Type":"ContainerStarted","Data":"392b2c1a8792bc2b3afd4026e934079ec84f38df9f74894e9bdec53e8c2734ab"} Dec 03 11:16:46 crc kubenswrapper[4756]: I1203 11:16:46.764940 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" podStartSLOduration=4.764922001 podStartE2EDuration="4.764922001s" podCreationTimestamp="2025-12-03 11:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:16:46.762154945 +0000 UTC m=+1417.792156209" watchObservedRunningTime="2025-12-03 11:16:46.764922001 +0000 UTC m=+1417.794923235" Dec 03 11:16:46 crc kubenswrapper[4756]: I1203 11:16:46.800209 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-x7t27" podStartSLOduration=2.800181139 podStartE2EDuration="2.800181139s" podCreationTimestamp="2025-12-03 11:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:16:46.794537553 +0000 UTC m=+1417.824538797" watchObservedRunningTime="2025-12-03 11:16:46.800181139 +0000 UTC m=+1417.830182383" Dec 03 11:16:49 crc kubenswrapper[4756]: I1203 11:16:49.925418 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f6f9857bb-rncmk" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Dec 03 11:16:49 crc kubenswrapper[4756]: I1203 11:16:49.926047 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.825322 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7f3dac4c-efe2-405f-ade3-b2464752ebd4","Type":"ContainerStarted","Data":"32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d"} Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.825516 4756 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7f3dac4c-efe2-405f-ade3-b2464752ebd4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d" gracePeriod=30 Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.834977 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ca2fd3-246d-46d0-bfd3-17aca288ab94","Type":"ContainerStarted","Data":"17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c"} Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.841460 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14f59def-c3cb-4d35-b553-9566c9bc0b53","Type":"ContainerStarted","Data":"fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519"} Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.841531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14f59def-c3cb-4d35-b553-9566c9bc0b53","Type":"ContainerStarted","Data":"22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d"} Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.858629 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd","Type":"ContainerStarted","Data":"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7"} Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.858694 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd","Type":"ContainerStarted","Data":"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4"} Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.858759 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" 
containerName="nova-metadata-log" containerID="cri-o://9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4" gracePeriod=30 Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.858829 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerName="nova-metadata-metadata" containerID="cri-o://d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7" gracePeriod=30 Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.873569 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.769718209 podStartE2EDuration="9.873544536s" podCreationTimestamp="2025-12-03 11:16:42 +0000 UTC" firstStartedPulling="2025-12-03 11:16:44.273500962 +0000 UTC m=+1415.303502206" lastFinishedPulling="2025-12-03 11:16:50.377327289 +0000 UTC m=+1421.407328533" observedRunningTime="2025-12-03 11:16:51.849492087 +0000 UTC m=+1422.879493341" watchObservedRunningTime="2025-12-03 11:16:51.873544536 +0000 UTC m=+1422.903545780" Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.875085 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.021610912 podStartE2EDuration="8.875077643s" podCreationTimestamp="2025-12-03 11:16:43 +0000 UTC" firstStartedPulling="2025-12-03 11:16:44.534784919 +0000 UTC m=+1415.564786163" lastFinishedPulling="2025-12-03 11:16:50.38825165 +0000 UTC m=+1421.418252894" observedRunningTime="2025-12-03 11:16:51.872070599 +0000 UTC m=+1422.902071863" watchObservedRunningTime="2025-12-03 11:16:51.875077643 +0000 UTC m=+1422.905078887" Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.894803 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.903332239 podStartE2EDuration="9.894782247s" podCreationTimestamp="2025-12-03 11:16:42 
+0000 UTC" firstStartedPulling="2025-12-03 11:16:44.384163298 +0000 UTC m=+1415.414164532" lastFinishedPulling="2025-12-03 11:16:50.375613296 +0000 UTC m=+1421.405614540" observedRunningTime="2025-12-03 11:16:51.892217667 +0000 UTC m=+1422.922218931" watchObservedRunningTime="2025-12-03 11:16:51.894782247 +0000 UTC m=+1422.924783491" Dec 03 11:16:51 crc kubenswrapper[4756]: I1203 11:16:51.920557 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.7532144240000003 podStartE2EDuration="9.920534219s" podCreationTimestamp="2025-12-03 11:16:42 +0000 UTC" firstStartedPulling="2025-12-03 11:16:44.209429237 +0000 UTC m=+1415.239430481" lastFinishedPulling="2025-12-03 11:16:50.376749032 +0000 UTC m=+1421.406750276" observedRunningTime="2025-12-03 11:16:51.916917266 +0000 UTC m=+1422.946918510" watchObservedRunningTime="2025-12-03 11:16:51.920534219 +0000 UTC m=+1422.950535463" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.552738 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.556630 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-combined-ca-bundle\") pod \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.556682 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vqpn\" (UniqueName: \"kubernetes.io/projected/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-kube-api-access-6vqpn\") pod \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.556709 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-config-data\") pod \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.556825 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-logs\") pod \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\" (UID: \"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd\") " Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.557467 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-logs" (OuterVolumeSpecName: "logs") pod "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" (UID: "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.564425 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-kube-api-access-6vqpn" (OuterVolumeSpecName: "kube-api-access-6vqpn") pod "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" (UID: "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd"). InnerVolumeSpecName "kube-api-access-6vqpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.599378 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-config-data" (OuterVolumeSpecName: "config-data") pod "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" (UID: "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.604551 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" (UID: "2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.659989 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.660039 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.660054 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vqpn\" (UniqueName: \"kubernetes.io/projected/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-kube-api-access-6vqpn\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.660063 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.872716 4756 generic.go:334] "Generic (PLEG): container finished" podID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerID="d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7" exitCode=0 Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.872758 4756 generic.go:334] "Generic (PLEG): container finished" podID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerID="9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4" exitCode=143 Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.873106 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd","Type":"ContainerDied","Data":"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7"} Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.873177 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd","Type":"ContainerDied","Data":"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4"} Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.873188 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd","Type":"ContainerDied","Data":"aff788b53dbe317b7c0f15bea8849de5752ff34331885d5795ef8ca869df6f47"} Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.873209 4756 scope.go:117] "RemoveContainer" containerID="d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.873667 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.923942 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.924161 4756 scope.go:117] "RemoveContainer" containerID="9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.961909 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.972699 4756 scope.go:117] "RemoveContainer" containerID="d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7" Dec 03 11:16:52 crc kubenswrapper[4756]: E1203 11:16:52.973530 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7\": container with ID starting with d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7 not found: ID does not exist" 
containerID="d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.973572 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7"} err="failed to get container status \"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7\": rpc error: code = NotFound desc = could not find container \"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7\": container with ID starting with d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7 not found: ID does not exist" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.974519 4756 scope.go:117] "RemoveContainer" containerID="9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4" Dec 03 11:16:52 crc kubenswrapper[4756]: E1203 11:16:52.975388 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4\": container with ID starting with 9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4 not found: ID does not exist" containerID="9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.975500 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4"} err="failed to get container status \"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4\": rpc error: code = NotFound desc = could not find container \"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4\": container with ID starting with 9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4 not found: ID does not exist" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.975597 4756 scope.go:117] 
"RemoveContainer" containerID="d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.976257 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7"} err="failed to get container status \"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7\": rpc error: code = NotFound desc = could not find container \"d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7\": container with ID starting with d475047ba0f44e9806d69f87655b9e560cbb7a2ea691fcbeabae2a3b4500b3c7 not found: ID does not exist" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.976366 4756 scope.go:117] "RemoveContainer" containerID="9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.977805 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4"} err="failed to get container status \"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4\": rpc error: code = NotFound desc = could not find container \"9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4\": container with ID starting with 9531b5fd9a36c0ff5547ec44b3b32ba36442a9de546374b5b39cda8bb4a770b4 not found: ID does not exist" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.983158 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:52 crc kubenswrapper[4756]: E1203 11:16:52.983857 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerName="nova-metadata-metadata" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.983980 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" 
containerName="nova-metadata-metadata" Dec 03 11:16:52 crc kubenswrapper[4756]: E1203 11:16:52.984329 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerName="nova-metadata-log" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.984412 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerName="nova-metadata-log" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.984750 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerName="nova-metadata-metadata" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.984855 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" containerName="nova-metadata-log" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.986993 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.993224 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.993611 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 11:16:52 crc kubenswrapper[4756]: I1203 11:16:52.999530 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.081615 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba74e4-794c-4822-bc88-e9edbf99dc06-logs\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.081767 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.081888 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.081933 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-config-data\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.082008 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qvx\" (UniqueName: \"kubernetes.io/projected/4dba74e4-794c-4822-bc88-e9edbf99dc06-kube-api-access-r4qvx\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.184785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba74e4-794c-4822-bc88-e9edbf99dc06-logs\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.184922 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.185069 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.185119 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-config-data\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.185180 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4qvx\" (UniqueName: \"kubernetes.io/projected/4dba74e4-794c-4822-bc88-e9edbf99dc06-kube-api-access-r4qvx\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.186446 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba74e4-794c-4822-bc88-e9edbf99dc06-logs\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.190943 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " 
pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.191811 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-config-data\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.203837 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.212015 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4qvx\" (UniqueName: \"kubernetes.io/projected/4dba74e4-794c-4822-bc88-e9edbf99dc06-kube-api-access-r4qvx\") pod \"nova-metadata-0\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.252059 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd" path="/var/lib/kubelet/pods/2567590b-08a6-4a2b-b2b2-0d2fd2e4a2dd/volumes" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.320482 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.472566 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.473075 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.488697 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.855405 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.892561 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dba74e4-794c-4822-bc88-e9edbf99dc06","Type":"ContainerStarted","Data":"bd8cfa4f1bb4c9871d99a6f581db8cb417426f510a1bc08c1f34ef326d2d5e9c"} Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.899728 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.916833 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 11:16:53 crc kubenswrapper[4756]: I1203 11:16:53.916899 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.033731 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nc9r6"] Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.034529 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" podUID="3582e52f-4806-401b-a822-6a98777de800" containerName="dnsmasq-dns" 
containerID="cri-o://8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d" gracePeriod=10 Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.098290 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.563356 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.563424 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.571872 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.753177 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-sb\") pod \"3582e52f-4806-401b-a822-6a98777de800\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.754067 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-config\") pod \"3582e52f-4806-401b-a822-6a98777de800\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.754233 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg9b6\" (UniqueName: \"kubernetes.io/projected/3582e52f-4806-401b-a822-6a98777de800-kube-api-access-tg9b6\") pod \"3582e52f-4806-401b-a822-6a98777de800\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.754273 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-svc\") pod \"3582e52f-4806-401b-a822-6a98777de800\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.754330 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-nb\") pod \"3582e52f-4806-401b-a822-6a98777de800\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.754398 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-swift-storage-0\") pod \"3582e52f-4806-401b-a822-6a98777de800\" (UID: \"3582e52f-4806-401b-a822-6a98777de800\") " Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.761770 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3582e52f-4806-401b-a822-6a98777de800-kube-api-access-tg9b6" (OuterVolumeSpecName: "kube-api-access-tg9b6") pod "3582e52f-4806-401b-a822-6a98777de800" (UID: "3582e52f-4806-401b-a822-6a98777de800"). InnerVolumeSpecName "kube-api-access-tg9b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.893030 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg9b6\" (UniqueName: \"kubernetes.io/projected/3582e52f-4806-401b-a822-6a98777de800-kube-api-access-tg9b6\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.914890 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3582e52f-4806-401b-a822-6a98777de800" (UID: "3582e52f-4806-401b-a822-6a98777de800"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.919995 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3582e52f-4806-401b-a822-6a98777de800" (UID: "3582e52f-4806-401b-a822-6a98777de800"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.920830 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3582e52f-4806-401b-a822-6a98777de800" (UID: "3582e52f-4806-401b-a822-6a98777de800"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.921592 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-config" (OuterVolumeSpecName: "config") pod "3582e52f-4806-401b-a822-6a98777de800" (UID: "3582e52f-4806-401b-a822-6a98777de800"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.925090 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3582e52f-4806-401b-a822-6a98777de800" (UID: "3582e52f-4806-401b-a822-6a98777de800"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.927131 4756 generic.go:334] "Generic (PLEG): container finished" podID="3582e52f-4806-401b-a822-6a98777de800" containerID="8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d" exitCode=0 Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.927209 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" event={"ID":"3582e52f-4806-401b-a822-6a98777de800","Type":"ContainerDied","Data":"8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d"} Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.927270 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" event={"ID":"3582e52f-4806-401b-a822-6a98777de800","Type":"ContainerDied","Data":"25c6dfb4854023460618ae07bbd436b233aa44436b8c16eb79332b83417bbf5d"} Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.927293 4756 scope.go:117] "RemoveContainer" containerID="8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.927452 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nc9r6" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.953810 4756 generic.go:334] "Generic (PLEG): container finished" podID="de2c45dd-c649-4e7f-bdcb-259bbc663d8c" containerID="5acb5aea66d13bf3cba991c192e7cf1d5c437bd45fed9f1127abef1914bade44" exitCode=0 Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.953899 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6pnv6" event={"ID":"de2c45dd-c649-4e7f-bdcb-259bbc663d8c","Type":"ContainerDied","Data":"5acb5aea66d13bf3cba991c192e7cf1d5c437bd45fed9f1127abef1914bade44"} Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.961737 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dba74e4-794c-4822-bc88-e9edbf99dc06","Type":"ContainerStarted","Data":"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90"} Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.995435 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.995477 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.995497 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.995507 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Dec 03 11:16:54 crc kubenswrapper[4756]: I1203 11:16:54.995517 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3582e52f-4806-401b-a822-6a98777de800-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.000979 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.061328 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nc9r6"] Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.061797 4756 scope.go:117] "RemoveContainer" containerID="3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.075043 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nc9r6"] Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.095218 4756 scope.go:117] "RemoveContainer" containerID="8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d" Dec 03 11:16:55 crc kubenswrapper[4756]: E1203 11:16:55.099229 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d\": container with ID starting with 8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d not found: ID does not exist" containerID="8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.099289 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d"} err="failed to get container status \"8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d\": rpc error: code = NotFound desc = could not find container 
\"8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d\": container with ID starting with 8cba40079cd98cf117961a71cc7e2f86cd16ab55ab1bd7e13792b484a85cda3d not found: ID does not exist" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.099324 4756 scope.go:117] "RemoveContainer" containerID="3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26" Dec 03 11:16:55 crc kubenswrapper[4756]: E1203 11:16:55.100146 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26\": container with ID starting with 3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26 not found: ID does not exist" containerID="3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.100219 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26"} err="failed to get container status \"3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26\": rpc error: code = NotFound desc = could not find container \"3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26\": container with ID starting with 3783de38208ecb2938fb168f5806e4de388c223fe6929b72fba33ae68c374e26 not found: ID does not exist" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.259090 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3582e52f-4806-401b-a822-6a98777de800" path="/var/lib/kubelet/pods/3582e52f-4806-401b-a822-6a98777de800/volumes" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.722689 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.926943 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-combined-ca-bundle\") pod \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.927063 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-scripts\") pod \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.927140 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-secret-key\") pod \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.927172 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-config-data\") pod \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.927398 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lwwm\" (UniqueName: \"kubernetes.io/projected/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-kube-api-access-5lwwm\") pod \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.927497 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-tls-certs\") pod \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.928380 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-logs\") pod \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\" (UID: \"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3\") " Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.938080 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-logs" (OuterVolumeSpecName: "logs") pod "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" (UID: "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.983241 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-kube-api-access-5lwwm" (OuterVolumeSpecName: "kube-api-access-5lwwm") pod "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" (UID: "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3"). InnerVolumeSpecName "kube-api-access-5lwwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.992792 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" (UID: "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:55 crc kubenswrapper[4756]: I1203 11:16:55.995826 4756 generic.go:334] "Generic (PLEG): container finished" podID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerID="c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5" exitCode=137 Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.000978 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6f9857bb-rncmk" event={"ID":"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3","Type":"ContainerDied","Data":"c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5"} Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.001392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6f9857bb-rncmk" event={"ID":"1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3","Type":"ContainerDied","Data":"29b135dc5193c36bf4d0d3801e8728a8bb216c6c9f4bf31bda97e2079a723df2"} Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.001494 4756 scope.go:117] "RemoveContainer" containerID="7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.001851 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f6f9857bb-rncmk" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.005510 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dba74e4-794c-4822-bc88-e9edbf99dc06","Type":"ContainerStarted","Data":"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3"} Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.005546 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" (UID: "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.007788 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-scripts" (OuterVolumeSpecName: "scripts") pod "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" (UID: "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.045323 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lwwm\" (UniqueName: \"kubernetes.io/projected/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-kube-api-access-5lwwm\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.045372 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.045386 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.045396 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.045406 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.053692 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-config-data" (OuterVolumeSpecName: "config-data") pod "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" (UID: "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.053741 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.053712476 podStartE2EDuration="4.053712476s" podCreationTimestamp="2025-12-03 11:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:16:56.042079014 +0000 UTC m=+1427.072080258" watchObservedRunningTime="2025-12-03 11:16:56.053712476 +0000 UTC m=+1427.083713730" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.066582 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" (UID: "1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.148779 4756 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.149344 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.348469 4756 scope.go:117] "RemoveContainer" containerID="c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.355246 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f6f9857bb-rncmk"] Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.372186 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5f6f9857bb-rncmk"] Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.392579 4756 scope.go:117] "RemoveContainer" containerID="7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808" Dec 03 11:16:56 crc kubenswrapper[4756]: E1203 11:16:56.393339 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808\": container with ID starting with 7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808 not found: ID does not exist" containerID="7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.393401 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808"} err="failed to get container status 
\"7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808\": rpc error: code = NotFound desc = could not find container \"7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808\": container with ID starting with 7ce0400b24e755e6d3cb0e45b9b47395b1f68edbb781cc9da8036fd050990808 not found: ID does not exist" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.393449 4756 scope.go:117] "RemoveContainer" containerID="c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5" Dec 03 11:16:56 crc kubenswrapper[4756]: E1203 11:16:56.394296 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5\": container with ID starting with c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5 not found: ID does not exist" containerID="c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.394322 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5"} err="failed to get container status \"c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5\": rpc error: code = NotFound desc = could not find container \"c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5\": container with ID starting with c653d5bf2c3fc4688bf9de690d9cb7282a458344e3d96b4a15f3ba05f2b6ecb5 not found: ID does not exist" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.414807 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.462116 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-config-data\") pod \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.462510 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqkhv\" (UniqueName: \"kubernetes.io/projected/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-kube-api-access-zqkhv\") pod \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.462625 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-scripts\") pod \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.462736 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-combined-ca-bundle\") pod \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\" (UID: \"de2c45dd-c649-4e7f-bdcb-259bbc663d8c\") " Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.478278 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-kube-api-access-zqkhv" (OuterVolumeSpecName: "kube-api-access-zqkhv") pod "de2c45dd-c649-4e7f-bdcb-259bbc663d8c" (UID: "de2c45dd-c649-4e7f-bdcb-259bbc663d8c"). InnerVolumeSpecName "kube-api-access-zqkhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.484100 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-scripts" (OuterVolumeSpecName: "scripts") pod "de2c45dd-c649-4e7f-bdcb-259bbc663d8c" (UID: "de2c45dd-c649-4e7f-bdcb-259bbc663d8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.503817 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de2c45dd-c649-4e7f-bdcb-259bbc663d8c" (UID: "de2c45dd-c649-4e7f-bdcb-259bbc663d8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.513845 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-config-data" (OuterVolumeSpecName: "config-data") pod "de2c45dd-c649-4e7f-bdcb-259bbc663d8c" (UID: "de2c45dd-c649-4e7f-bdcb-259bbc663d8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.565618 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.566087 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqkhv\" (UniqueName: \"kubernetes.io/projected/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-kube-api-access-zqkhv\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.566156 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.566223 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de2c45dd-c649-4e7f-bdcb-259bbc663d8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:56 crc kubenswrapper[4756]: I1203 11:16:56.644315 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.029601 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6pnv6" event={"ID":"de2c45dd-c649-4e7f-bdcb-259bbc663d8c","Type":"ContainerDied","Data":"4180a63ae6110c59838a145ac402b0e6bdc70c7aeb9ddc69242a4727d7f1be29"} Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.029649 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4180a63ae6110c59838a145ac402b0e6bdc70c7aeb9ddc69242a4727d7f1be29" Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.029729 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6pnv6" Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.177361 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.178101 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-log" containerID="cri-o://22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d" gracePeriod=30 Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.178191 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-api" containerID="cri-o://fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519" gracePeriod=30 Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.254526 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" path="/var/lib/kubelet/pods/1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3/volumes" Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.255321 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.255559 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b0ca2fd3-246d-46d0-bfd3-17aca288ab94" containerName="nova-scheduler-scheduler" containerID="cri-o://17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c" gracePeriod=30 Dec 03 11:16:57 crc kubenswrapper[4756]: I1203 11:16:57.265632 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.062827 4756 generic.go:334] "Generic (PLEG): container finished" podID="14f59def-c3cb-4d35-b553-9566c9bc0b53" 
containerID="22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d" exitCode=143 Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.063167 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-log" containerID="cri-o://7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90" gracePeriod=30 Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.063636 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14f59def-c3cb-4d35-b553-9566c9bc0b53","Type":"ContainerDied","Data":"22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d"} Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.064254 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-metadata" containerID="cri-o://a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3" gracePeriod=30 Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.334241 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.334357 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.895935 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:16:58 crc kubenswrapper[4756]: I1203 11:16:58.904763 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086053 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-combined-ca-bundle\") pod \"4dba74e4-794c-4822-bc88-e9edbf99dc06\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086123 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba74e4-794c-4822-bc88-e9edbf99dc06-logs\") pod \"4dba74e4-794c-4822-bc88-e9edbf99dc06\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086269 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-combined-ca-bundle\") pod \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086337 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-config-data\") pod \"4dba74e4-794c-4822-bc88-e9edbf99dc06\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086374 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkvgz\" (UniqueName: \"kubernetes.io/projected/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-kube-api-access-dkvgz\") pod \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086434 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-nova-metadata-tls-certs\") pod \"4dba74e4-794c-4822-bc88-e9edbf99dc06\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086476 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-config-data\") pod \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\" (UID: \"b0ca2fd3-246d-46d0-bfd3-17aca288ab94\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.086572 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4qvx\" (UniqueName: \"kubernetes.io/projected/4dba74e4-794c-4822-bc88-e9edbf99dc06-kube-api-access-r4qvx\") pod \"4dba74e4-794c-4822-bc88-e9edbf99dc06\" (UID: \"4dba74e4-794c-4822-bc88-e9edbf99dc06\") " Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.090553 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dba74e4-794c-4822-bc88-e9edbf99dc06-logs" (OuterVolumeSpecName: "logs") pod "4dba74e4-794c-4822-bc88-e9edbf99dc06" (UID: "4dba74e4-794c-4822-bc88-e9edbf99dc06"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.095145 4756 generic.go:334] "Generic (PLEG): container finished" podID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerID="a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3" exitCode=0 Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.095193 4756 generic.go:334] "Generic (PLEG): container finished" podID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerID="7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90" exitCode=143 Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.095294 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dba74e4-794c-4822-bc88-e9edbf99dc06","Type":"ContainerDied","Data":"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3"} Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.095335 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dba74e4-794c-4822-bc88-e9edbf99dc06","Type":"ContainerDied","Data":"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90"} Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.095349 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4dba74e4-794c-4822-bc88-e9edbf99dc06","Type":"ContainerDied","Data":"bd8cfa4f1bb4c9871d99a6f581db8cb417426f510a1bc08c1f34ef326d2d5e9c"} Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.095370 4756 scope.go:117] "RemoveContainer" containerID="a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.095548 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.099204 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-kube-api-access-dkvgz" (OuterVolumeSpecName: "kube-api-access-dkvgz") pod "b0ca2fd3-246d-46d0-bfd3-17aca288ab94" (UID: "b0ca2fd3-246d-46d0-bfd3-17aca288ab94"). InnerVolumeSpecName "kube-api-access-dkvgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.104368 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dba74e4-794c-4822-bc88-e9edbf99dc06-kube-api-access-r4qvx" (OuterVolumeSpecName: "kube-api-access-r4qvx") pod "4dba74e4-794c-4822-bc88-e9edbf99dc06" (UID: "4dba74e4-794c-4822-bc88-e9edbf99dc06"). InnerVolumeSpecName "kube-api-access-r4qvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.107059 4756 generic.go:334] "Generic (PLEG): container finished" podID="b0ca2fd3-246d-46d0-bfd3-17aca288ab94" containerID="17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c" exitCode=0 Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.107130 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ca2fd3-246d-46d0-bfd3-17aca288ab94","Type":"ContainerDied","Data":"17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c"} Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.107169 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b0ca2fd3-246d-46d0-bfd3-17aca288ab94","Type":"ContainerDied","Data":"ebc4ed819708c2a7f63cc1d9bb4e5defcc9221e3dd4c0d52fcd8c976287e802d"} Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.107181 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.131867 4756 scope.go:117] "RemoveContainer" containerID="7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.136355 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-config-data" (OuterVolumeSpecName: "config-data") pod "4dba74e4-794c-4822-bc88-e9edbf99dc06" (UID: "4dba74e4-794c-4822-bc88-e9edbf99dc06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.137610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dba74e4-794c-4822-bc88-e9edbf99dc06" (UID: "4dba74e4-794c-4822-bc88-e9edbf99dc06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.144895 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-config-data" (OuterVolumeSpecName: "config-data") pod "b0ca2fd3-246d-46d0-bfd3-17aca288ab94" (UID: "b0ca2fd3-246d-46d0-bfd3-17aca288ab94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.156809 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0ca2fd3-246d-46d0-bfd3-17aca288ab94" (UID: "b0ca2fd3-246d-46d0-bfd3-17aca288ab94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.162501 4756 scope.go:117] "RemoveContainer" containerID="a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.163376 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3\": container with ID starting with a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3 not found: ID does not exist" containerID="a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.163425 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3"} err="failed to get container status \"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3\": rpc error: code = NotFound desc = could not find container \"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3\": container with ID starting with a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3 not found: ID does not exist" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.163460 4756 scope.go:117] "RemoveContainer" containerID="7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.164042 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90\": container with ID starting with 7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90 not found: ID does not exist" containerID="7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.164087 
4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90"} err="failed to get container status \"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90\": rpc error: code = NotFound desc = could not find container \"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90\": container with ID starting with 7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90 not found: ID does not exist" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.164113 4756 scope.go:117] "RemoveContainer" containerID="a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.164680 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3"} err="failed to get container status \"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3\": rpc error: code = NotFound desc = could not find container \"a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3\": container with ID starting with a8590588cfc5fb6c14cba6b796637b347b4503ddee75ac532ac9603cdc80b3c3 not found: ID does not exist" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.164709 4756 scope.go:117] "RemoveContainer" containerID="7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.165412 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90"} err="failed to get container status \"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90\": rpc error: code = NotFound desc = could not find container \"7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90\": container with ID starting with 
7a0197749f2f56c5c66565cb1456a5f06c8623a5dfda3503b22036c505334c90 not found: ID does not exist" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.165470 4756 scope.go:117] "RemoveContainer" containerID="17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.191428 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.191487 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dba74e4-794c-4822-bc88-e9edbf99dc06-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.191502 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.191514 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.191528 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkvgz\" (UniqueName: \"kubernetes.io/projected/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-kube-api-access-dkvgz\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.191543 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ca2fd3-246d-46d0-bfd3-17aca288ab94-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.191575 4756 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-r4qvx\" (UniqueName: \"kubernetes.io/projected/4dba74e4-794c-4822-bc88-e9edbf99dc06-kube-api-access-r4qvx\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.196184 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4dba74e4-794c-4822-bc88-e9edbf99dc06" (UID: "4dba74e4-794c-4822-bc88-e9edbf99dc06"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.203347 4756 scope.go:117] "RemoveContainer" containerID="17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.203908 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c\": container with ID starting with 17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c not found: ID does not exist" containerID="17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.203970 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c"} err="failed to get container status \"17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c\": rpc error: code = NotFound desc = could not find container \"17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c\": container with ID starting with 17bc5e9f65788ca5d0d2fd023b4c73399b871bd5f0998d289574078f8664369c not found: ID does not exist" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.294017 4756 reconciler_common.go:293] "Volume detached for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4dba74e4-794c-4822-bc88-e9edbf99dc06-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.402532 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7bmnl"] Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.403905 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon-log" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.403934 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon-log" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.404001 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404011 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.404026 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3582e52f-4806-401b-a822-6a98777de800" containerName="dnsmasq-dns" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404041 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3582e52f-4806-401b-a822-6a98777de800" containerName="dnsmasq-dns" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.404070 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca2fd3-246d-46d0-bfd3-17aca288ab94" containerName="nova-scheduler-scheduler" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404080 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca2fd3-246d-46d0-bfd3-17aca288ab94" containerName="nova-scheduler-scheduler" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.404121 4756 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-log" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404133 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-log" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.404152 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-metadata" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404161 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-metadata" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.404196 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3582e52f-4806-401b-a822-6a98777de800" containerName="init" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404204 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3582e52f-4806-401b-a822-6a98777de800" containerName="init" Dec 03 11:16:59 crc kubenswrapper[4756]: E1203 11:16:59.404230 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2c45dd-c649-4e7f-bdcb-259bbc663d8c" containerName="nova-manage" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404238 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2c45dd-c649-4e7f-bdcb-259bbc663d8c" containerName="nova-manage" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404792 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-log" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404833 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404860 4756 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" containerName="nova-metadata-metadata" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404880 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2c45dd-c649-4e7f-bdcb-259bbc663d8c" containerName="nova-manage" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404897 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca2fd3-246d-46d0-bfd3-17aca288ab94" containerName="nova-scheduler-scheduler" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404926 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3582e52f-4806-401b-a822-6a98777de800" containerName="dnsmasq-dns" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.404969 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6aa931-1a56-41c7-85b7-3b4b0bb07dc3" containerName="horizon-log" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.416592 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.458268 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bmnl"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.498235 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.510984 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.540837 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.544171 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.552615 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.552782 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.601339 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.603747 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsn4\" (UniqueName: \"kubernetes.io/projected/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-kube-api-access-qzsn4\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.604012 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-utilities\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.604185 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-catalog-content\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.631046 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.647421 
4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.668276 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.673847 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.679875 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.685176 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.707945 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-catalog-content\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708047 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsn4\" (UniqueName: \"kubernetes.io/projected/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-kube-api-access-qzsn4\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708096 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708161 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708192 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-config-data\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708242 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-utilities\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708328 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmvw\" (UniqueName: \"kubernetes.io/projected/ec1943ca-3ada-4b39-884d-a822f2efba3f-kube-api-access-wdmvw\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708352 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1943ca-3ada-4b39-884d-a822f2efba3f-logs\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.708611 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-catalog-content\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.709021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-utilities\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.729500 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsn4\" (UniqueName: \"kubernetes.io/projected/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-kube-api-access-qzsn4\") pod \"redhat-operators-7bmnl\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.790398 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.811075 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdmvw\" (UniqueName: \"kubernetes.io/projected/ec1943ca-3ada-4b39-884d-a822f2efba3f-kube-api-access-wdmvw\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.811163 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1943ca-3ada-4b39-884d-a822f2efba3f-logs\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.811234 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-config-data\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.811259 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tjmm\" (UniqueName: \"kubernetes.io/projected/887bec1e-c8a0-490f-a945-d6d8885c0480-kube-api-access-9tjmm\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.811306 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: 
I1203 11:16:59.811349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.811405 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.811430 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-config-data\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.812303 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1943ca-3ada-4b39-884d-a822f2efba3f-logs\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.818239 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.820368 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.820815 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-config-data\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.836997 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdmvw\" (UniqueName: \"kubernetes.io/projected/ec1943ca-3ada-4b39-884d-a822f2efba3f-kube-api-access-wdmvw\") pod \"nova-metadata-0\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.880637 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.913360 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-config-data\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.913435 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tjmm\" (UniqueName: \"kubernetes.io/projected/887bec1e-c8a0-490f-a945-d6d8885c0480-kube-api-access-9tjmm\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.913509 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.936058 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-config-data\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.956693 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tjmm\" (UniqueName: \"kubernetes.io/projected/887bec1e-c8a0-490f-a945-d6d8885c0480-kube-api-access-9tjmm\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:16:59 crc kubenswrapper[4756]: I1203 11:16:59.958093 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " pod="openstack/nova-scheduler-0" Dec 03 11:17:00 crc kubenswrapper[4756]: I1203 11:17:00.009677 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:17:00 crc kubenswrapper[4756]: I1203 11:17:00.379304 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7bmnl"] Dec 03 11:17:00 crc kubenswrapper[4756]: I1203 11:17:00.418805 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:17:00 crc kubenswrapper[4756]: W1203 11:17:00.723235 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887bec1e_c8a0_490f_a945_d6d8885c0480.slice/crio-0f217f423454d2c788e08134c641a2ee8eaac9dc7ec9d6dd930db360fc69d2c8 WatchSource:0}: Error finding container 0f217f423454d2c788e08134c641a2ee8eaac9dc7ec9d6dd930db360fc69d2c8: Status 404 returned error can't find the container with id 0f217f423454d2c788e08134c641a2ee8eaac9dc7ec9d6dd930db360fc69d2c8 Dec 03 11:17:00 crc kubenswrapper[4756]: I1203 11:17:00.724101 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.057025 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.144578 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f59def-c3cb-4d35-b553-9566c9bc0b53-logs\") pod \"14f59def-c3cb-4d35-b553-9566c9bc0b53\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.145563 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-config-data\") pod \"14f59def-c3cb-4d35-b553-9566c9bc0b53\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.145618 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhbbp\" (UniqueName: \"kubernetes.io/projected/14f59def-c3cb-4d35-b553-9566c9bc0b53-kube-api-access-fhbbp\") pod \"14f59def-c3cb-4d35-b553-9566c9bc0b53\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.145681 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-combined-ca-bundle\") pod \"14f59def-c3cb-4d35-b553-9566c9bc0b53\" (UID: \"14f59def-c3cb-4d35-b553-9566c9bc0b53\") " Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.146290 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f59def-c3cb-4d35-b553-9566c9bc0b53-logs" (OuterVolumeSpecName: "logs") pod "14f59def-c3cb-4d35-b553-9566c9bc0b53" (UID: "14f59def-c3cb-4d35-b553-9566c9bc0b53"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.154317 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f59def-c3cb-4d35-b553-9566c9bc0b53-kube-api-access-fhbbp" (OuterVolumeSpecName: "kube-api-access-fhbbp") pod "14f59def-c3cb-4d35-b553-9566c9bc0b53" (UID: "14f59def-c3cb-4d35-b553-9566c9bc0b53"). InnerVolumeSpecName "kube-api-access-fhbbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.167345 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1943ca-3ada-4b39-884d-a822f2efba3f","Type":"ContainerStarted","Data":"33d6806e90bb90a6348aa77bab5c84df53d837194e01113411db5fa259fa52ae"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.167412 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1943ca-3ada-4b39-884d-a822f2efba3f","Type":"ContainerStarted","Data":"4088dadbd6f6e931885ef90b15360eb66d31b0e1475086befffd7df9bab9dfee"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.182361 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"887bec1e-c8a0-490f-a945-d6d8885c0480","Type":"ContainerStarted","Data":"aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.182442 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"887bec1e-c8a0-490f-a945-d6d8885c0480","Type":"ContainerStarted","Data":"0f217f423454d2c788e08134c641a2ee8eaac9dc7ec9d6dd930db360fc69d2c8"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.190943 4756 generic.go:334] "Generic (PLEG): container finished" podID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerID="01321217fdb85b134bb1a4d0db7d5d10862730fd41ccf8f474c69d3fc97099df" exitCode=0 Dec 03 
11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.191082 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmnl" event={"ID":"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a","Type":"ContainerDied","Data":"01321217fdb85b134bb1a4d0db7d5d10862730fd41ccf8f474c69d3fc97099df"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.191117 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmnl" event={"ID":"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a","Type":"ContainerStarted","Data":"99e17c731d9b36878fb1e89ba09dcebf3cdc966ab081fe9d9e809be94d0df39d"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.206290 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-config-data" (OuterVolumeSpecName: "config-data") pod "14f59def-c3cb-4d35-b553-9566c9bc0b53" (UID: "14f59def-c3cb-4d35-b553-9566c9bc0b53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.210125 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14f59def-c3cb-4d35-b553-9566c9bc0b53" (UID: "14f59def-c3cb-4d35-b553-9566c9bc0b53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.221753 4756 generic.go:334] "Generic (PLEG): container finished" podID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerID="fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519" exitCode=0 Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.221823 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14f59def-c3cb-4d35-b553-9566c9bc0b53","Type":"ContainerDied","Data":"fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.221863 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14f59def-c3cb-4d35-b553-9566c9bc0b53","Type":"ContainerDied","Data":"6f09bffcd0f66cbf72843e193af3d8e807023cc6cc140c7379f2b71c344cc263"} Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.221884 4756 scope.go:117] "RemoveContainer" containerID="fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519" Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.222124 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.250728 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.250788 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhbbp\" (UniqueName: \"kubernetes.io/projected/14f59def-c3cb-4d35-b553-9566c9bc0b53-kube-api-access-fhbbp\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.250802 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f59def-c3cb-4d35-b553-9566c9bc0b53-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.250815 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14f59def-c3cb-4d35-b553-9566c9bc0b53-logs\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.259419 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.259392553 podStartE2EDuration="2.259392553s" podCreationTimestamp="2025-12-03 11:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:01.224473766 +0000 UTC m=+1432.254475010" watchObservedRunningTime="2025-12-03 11:17:01.259392553 +0000 UTC m=+1432.289393797"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.263370 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dba74e4-794c-4822-bc88-e9edbf99dc06" path="/var/lib/kubelet/pods/4dba74e4-794c-4822-bc88-e9edbf99dc06/volumes"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.264204 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ca2fd3-246d-46d0-bfd3-17aca288ab94" path="/var/lib/kubelet/pods/b0ca2fd3-246d-46d0-bfd3-17aca288ab94/volumes"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.382918 4756 scope.go:117] "RemoveContainer" containerID="22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.415759 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.456918 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.476529 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 03 11:17:01 crc kubenswrapper[4756]: E1203 11:17:01.477610 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-log"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.477636 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-log"
Dec 03 11:17:01 crc kubenswrapper[4756]: E1203 11:17:01.477658 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-api"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.477668 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-api"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.478168 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-log"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.478220 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" containerName="nova-api-api"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.485760 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.496900 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.515647 4756 scope.go:117] "RemoveContainer" containerID="fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519"
Dec 03 11:17:01 crc kubenswrapper[4756]: E1203 11:17:01.517309 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519\": container with ID starting with fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519 not found: ID does not exist" containerID="fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.517437 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519"} err="failed to get container status \"fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519\": rpc error: code = NotFound desc = could not find container \"fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519\": container with ID starting with fa10502fb2f956d38dd7a5de2c8089414046d4a8036143f0dd920cd048fa3519 not found: ID does not exist"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.517501 4756 scope.go:117] "RemoveContainer" containerID="22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d"
Dec 03 11:17:01 crc kubenswrapper[4756]: E1203 11:17:01.525396 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d\": container with ID starting with 22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d not found: ID does not exist" containerID="22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.525464 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d"} err="failed to get container status \"22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d\": rpc error: code = NotFound desc = could not find container \"22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d\": container with ID starting with 22c6f9b52daf3dd7fd1851f902268e81e552dde8993d5d4d8ae0c68119e6370d not found: ID does not exist"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.533368 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.584979 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-logs\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.585059 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-config-data\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.585137 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gs9b\" (UniqueName: \"kubernetes.io/projected/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-kube-api-access-2gs9b\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.585194 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.686903 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.687143 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-logs\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.687181 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-config-data\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.687244 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gs9b\" (UniqueName: \"kubernetes.io/projected/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-kube-api-access-2gs9b\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.687683 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-logs\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.694977 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.707621 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-config-data\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.714221 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gs9b\" (UniqueName: \"kubernetes.io/projected/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-kube-api-access-2gs9b\") pod \"nova-api-0\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " pod="openstack/nova-api-0"
Dec 03 11:17:01 crc kubenswrapper[4756]: I1203 11:17:01.828822 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 03 11:17:02 crc kubenswrapper[4756]: I1203 11:17:02.249053 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1943ca-3ada-4b39-884d-a822f2efba3f","Type":"ContainerStarted","Data":"50b4c0081b3024d29df37184780ad0737ca0a892d736a9db2456101831563a11"}
Dec 03 11:17:02 crc kubenswrapper[4756]: I1203 11:17:02.306967 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.306909586 podStartE2EDuration="3.306909586s" podCreationTimestamp="2025-12-03 11:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:02.280431952 +0000 UTC m=+1433.310433206" watchObservedRunningTime="2025-12-03 11:17:02.306909586 +0000 UTC m=+1433.336910840"
Dec 03 11:17:02 crc kubenswrapper[4756]: I1203 11:17:02.449546 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 03 11:17:02 crc kubenswrapper[4756]: W1203 11:17:02.452891 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8d1bfe_24ca_4cde_9038_a3a01ebd807e.slice/crio-6699e165be5fed2f3e67ce143905a275e147ca8e3c7a7a694a2ac1c4f7a8f316 WatchSource:0}: Error finding container 6699e165be5fed2f3e67ce143905a275e147ca8e3c7a7a694a2ac1c4f7a8f316: Status 404 returned error can't find the container with id 6699e165be5fed2f3e67ce143905a275e147ca8e3c7a7a694a2ac1c4f7a8f316
Dec 03 11:17:03 crc kubenswrapper[4756]: I1203 11:17:03.256032 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f59def-c3cb-4d35-b553-9566c9bc0b53" path="/var/lib/kubelet/pods/14f59def-c3cb-4d35-b553-9566c9bc0b53/volumes"
Dec 03 11:17:03 crc kubenswrapper[4756]: I1203 11:17:03.265181 4756 generic.go:334] "Generic (PLEG): container finished" podID="0efda8e8-882b-44b9-9bcd-479286328ec1" containerID="392b2c1a8792bc2b3afd4026e934079ec84f38df9f74894e9bdec53e8c2734ab" exitCode=0
Dec 03 11:17:03 crc kubenswrapper[4756]: I1203 11:17:03.265276 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7t27" event={"ID":"0efda8e8-882b-44b9-9bcd-479286328ec1","Type":"ContainerDied","Data":"392b2c1a8792bc2b3afd4026e934079ec84f38df9f74894e9bdec53e8c2734ab"}
Dec 03 11:17:03 crc kubenswrapper[4756]: I1203 11:17:03.267496 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e","Type":"ContainerStarted","Data":"a1e1fdb5f105a9535bb74f6c018d41d8a431ff7a555f0f6c2c45f6de3b567063"}
Dec 03 11:17:03 crc kubenswrapper[4756]: I1203 11:17:03.267530 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e","Type":"ContainerStarted","Data":"6699e165be5fed2f3e67ce143905a275e147ca8e3c7a7a694a2ac1c4f7a8f316"}
Dec 03 11:17:03 crc kubenswrapper[4756]: I1203 11:17:03.270988 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmnl" event={"ID":"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a","Type":"ContainerStarted","Data":"35f0a4c8b694b0e8faf556b339877dc17700a41b4996e0271f76b9014a1e47de"}
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.100035 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.101420 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9772219e-2495-4892-8977-52360ac83b0a" containerName="kube-state-metrics" containerID="cri-o://bd3c1ac04dd6624fb6c52536a17f267f8249f437eca0c3b30306ce66a708cc99" gracePeriod=30
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.301943 4756 generic.go:334] "Generic (PLEG): container finished" podID="9772219e-2495-4892-8977-52360ac83b0a" containerID="bd3c1ac04dd6624fb6c52536a17f267f8249f437eca0c3b30306ce66a708cc99" exitCode=2
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.302046 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9772219e-2495-4892-8977-52360ac83b0a","Type":"ContainerDied","Data":"bd3c1ac04dd6624fb6c52536a17f267f8249f437eca0c3b30306ce66a708cc99"}
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.322240 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e","Type":"ContainerStarted","Data":"f98f9f84b4f861a030abb4975fbb43f6da91d5513978168a1ab7407b92fcad01"}
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.386692 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.386665754 podStartE2EDuration="3.386665754s" podCreationTimestamp="2025-12-03 11:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:04.386587622 +0000 UTC m=+1435.416588876" watchObservedRunningTime="2025-12-03 11:17:04.386665754 +0000 UTC m=+1435.416666998"
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.881419 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.882059 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.911084 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.920887 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7t27"
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.980828 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-config-data\") pod \"0efda8e8-882b-44b9-9bcd-479286328ec1\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") "
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.981170 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-combined-ca-bundle\") pod \"0efda8e8-882b-44b9-9bcd-479286328ec1\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") "
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.981227 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fvdt\" (UniqueName: \"kubernetes.io/projected/9772219e-2495-4892-8977-52360ac83b0a-kube-api-access-6fvdt\") pod \"9772219e-2495-4892-8977-52360ac83b0a\" (UID: \"9772219e-2495-4892-8977-52360ac83b0a\") "
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.981295 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xx5n\" (UniqueName: \"kubernetes.io/projected/0efda8e8-882b-44b9-9bcd-479286328ec1-kube-api-access-6xx5n\") pod \"0efda8e8-882b-44b9-9bcd-479286328ec1\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") "
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.981381 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-scripts\") pod \"0efda8e8-882b-44b9-9bcd-479286328ec1\" (UID: \"0efda8e8-882b-44b9-9bcd-479286328ec1\") "
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.996321 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-scripts" (OuterVolumeSpecName: "scripts") pod "0efda8e8-882b-44b9-9bcd-479286328ec1" (UID: "0efda8e8-882b-44b9-9bcd-479286328ec1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 11:17:04 crc kubenswrapper[4756]: I1203 11:17:04.996350 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9772219e-2495-4892-8977-52360ac83b0a-kube-api-access-6fvdt" (OuterVolumeSpecName: "kube-api-access-6fvdt") pod "9772219e-2495-4892-8977-52360ac83b0a" (UID: "9772219e-2495-4892-8977-52360ac83b0a"). InnerVolumeSpecName "kube-api-access-6fvdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.010919 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.024326 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efda8e8-882b-44b9-9bcd-479286328ec1-kube-api-access-6xx5n" (OuterVolumeSpecName: "kube-api-access-6xx5n") pod "0efda8e8-882b-44b9-9bcd-479286328ec1" (UID: "0efda8e8-882b-44b9-9bcd-479286328ec1"). InnerVolumeSpecName "kube-api-access-6xx5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.074200 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-config-data" (OuterVolumeSpecName: "config-data") pod "0efda8e8-882b-44b9-9bcd-479286328ec1" (UID: "0efda8e8-882b-44b9-9bcd-479286328ec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.076261 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0efda8e8-882b-44b9-9bcd-479286328ec1" (UID: "0efda8e8-882b-44b9-9bcd-479286328ec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.084701 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.084760 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fvdt\" (UniqueName: \"kubernetes.io/projected/9772219e-2495-4892-8977-52360ac83b0a-kube-api-access-6fvdt\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.084779 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xx5n\" (UniqueName: \"kubernetes.io/projected/0efda8e8-882b-44b9-9bcd-479286328ec1-kube-api-access-6xx5n\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.084791 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.084804 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0efda8e8-882b-44b9-9bcd-479286328ec1-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.342064 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x7t27" event={"ID":"0efda8e8-882b-44b9-9bcd-479286328ec1","Type":"ContainerDied","Data":"6e3b02cc192dd70b3ae05668e8280c698a05777393220b2175b234600b5ab403"}
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.342121 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3b02cc192dd70b3ae05668e8280c698a05777393220b2175b234600b5ab403"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.342203 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x7t27"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.348637 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9772219e-2495-4892-8977-52360ac83b0a","Type":"ContainerDied","Data":"bf8843eedbe597be2b735833c257f2c2ffc41e8b301ebfd5de12e9eca2712087"}
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.348658 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.348710 4756 scope.go:117] "RemoveContainer" containerID="bd3c1ac04dd6624fb6c52536a17f267f8249f437eca0c3b30306ce66a708cc99"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.356687 4756 generic.go:334] "Generic (PLEG): container finished" podID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerID="35f0a4c8b694b0e8faf556b339877dc17700a41b4996e0271f76b9014a1e47de" exitCode=0
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.356746 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmnl" event={"ID":"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a","Type":"ContainerDied","Data":"35f0a4c8b694b0e8faf556b339877dc17700a41b4996e0271f76b9014a1e47de"}
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.395311 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.414041 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.437136 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 03 11:17:05 crc kubenswrapper[4756]: E1203 11:17:05.437606 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efda8e8-882b-44b9-9bcd-479286328ec1" containerName="nova-cell1-conductor-db-sync"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.437627 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efda8e8-882b-44b9-9bcd-479286328ec1" containerName="nova-cell1-conductor-db-sync"
Dec 03 11:17:05 crc kubenswrapper[4756]: E1203 11:17:05.437648 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9772219e-2495-4892-8977-52360ac83b0a" containerName="kube-state-metrics"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.437654 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9772219e-2495-4892-8977-52360ac83b0a" containerName="kube-state-metrics"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.437881 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9772219e-2495-4892-8977-52360ac83b0a" containerName="kube-state-metrics"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.437910 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efda8e8-882b-44b9-9bcd-479286328ec1" containerName="nova-cell1-conductor-db-sync"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.438734 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.443301 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.449670 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.461363 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.463361 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.466927 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.466941 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.504701 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7945f798-6dd7-4887-9cb3-72852427cf8e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.505294 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjpf\" (UniqueName: \"kubernetes.io/projected/7945f798-6dd7-4887-9cb3-72852427cf8e-kube-api-access-kwjpf\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.505329 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7945f798-6dd7-4887-9cb3-72852427cf8e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.507060 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.610077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7945f798-6dd7-4887-9cb3-72852427cf8e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.610199 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.610227 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.610256 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjpf\" (UniqueName: \"kubernetes.io/projected/7945f798-6dd7-4887-9cb3-72852427cf8e-kube-api-access-kwjpf\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.610276 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7945f798-6dd7-4887-9cb3-72852427cf8e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.610301 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m48g\" (UniqueName: \"kubernetes.io/projected/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-api-access-8m48g\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.610507 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.618661 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7945f798-6dd7-4887-9cb3-72852427cf8e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.620943 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7945f798-6dd7-4887-9cb3-72852427cf8e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.637100 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjpf\" (UniqueName: \"kubernetes.io/projected/7945f798-6dd7-4887-9cb3-72852427cf8e-kube-api-access-kwjpf\") pod \"nova-cell1-conductor-0\" (UID: \"7945f798-6dd7-4887-9cb3-72852427cf8e\") " pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.714897 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.714999 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.715060 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m48g\" (UniqueName: \"kubernetes.io/projected/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-api-access-8m48g\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.715136 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.722364 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.723671 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.734651 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.752804 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m48g\" (UniqueName: \"kubernetes.io/projected/f5a21d10-a957-4ce1-b804-b75db51fe53c-kube-api-access-8m48g\") pod \"kube-state-metrics-0\" (UID: \"f5a21d10-a957-4ce1-b804-b75db51fe53c\") " pod="openstack/kube-state-metrics-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.784376 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:05 crc kubenswrapper[4756]: I1203 11:17:05.803466 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 03 11:17:06 crc kubenswrapper[4756]: I1203 11:17:06.736930 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 03 11:17:06 crc kubenswrapper[4756]: W1203 11:17:06.744322 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5a21d10_a957_4ce1_b804_b75db51fe53c.slice/crio-0de4a7404169767d25dda13444afe0e9dace430bb7c72bca4e38265f7926a0e6 WatchSource:0}: Error finding container 0de4a7404169767d25dda13444afe0e9dace430bb7c72bca4e38265f7926a0e6: Status 404 returned error can't find the container with id 0de4a7404169767d25dda13444afe0e9dace430bb7c72bca4e38265f7926a0e6
Dec 03 11:17:07 crc kubenswrapper[4756]: W1203 11:17:07.023416 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7945f798_6dd7_4887_9cb3_72852427cf8e.slice/crio-a7712931e5fec93210b6cbfd4b3b9b1ad54a6fbbc22f7ccb0719160e121dc285 WatchSource:0}: Error finding container a7712931e5fec93210b6cbfd4b3b9b1ad54a6fbbc22f7ccb0719160e121dc285: Status 404 returned error can't find the container with id a7712931e5fec93210b6cbfd4b3b9b1ad54a6fbbc22f7ccb0719160e121dc285
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.038284 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.248918 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9772219e-2495-4892-8977-52360ac83b0a" path="/var/lib/kubelet/pods/9772219e-2495-4892-8977-52360ac83b0a/volumes"
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.411670 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7945f798-6dd7-4887-9cb3-72852427cf8e","Type":"ContainerStarted","Data":"774885c6bd773c0f9c165cdbd0c56e6567ace42b3db2708881b51e6287c2ae44"}
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.411747 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7945f798-6dd7-4887-9cb3-72852427cf8e","Type":"ContainerStarted","Data":"a7712931e5fec93210b6cbfd4b3b9b1ad54a6fbbc22f7ccb0719160e121dc285"}
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.411843 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.414625 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f5a21d10-a957-4ce1-b804-b75db51fe53c","Type":"ContainerStarted","Data":"0de4a7404169767d25dda13444afe0e9dace430bb7c72bca4e38265f7926a0e6"}
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.415668 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.419843 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmnl" event={"ID":"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a","Type":"ContainerStarted","Data":"0466aa7d2153575da9355de2dad8009e1ec9121e36fd72cd300cfa2437988063"}
Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.443252 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.443222702 podStartE2EDuration="2.443222702s" podCreationTimestamp="2025-12-03 11:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:07.4299838 +0000 UTC m=+1438.459985054" watchObservedRunningTime="2025-12-03 11:17:07.443222702 +0000 UTC m=+1438.473223946"
Dec 03
11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.457582 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.055055874 podStartE2EDuration="2.457548419s" podCreationTimestamp="2025-12-03 11:17:05 +0000 UTC" firstStartedPulling="2025-12-03 11:17:06.766486957 +0000 UTC m=+1437.796488211" lastFinishedPulling="2025-12-03 11:17:07.168979512 +0000 UTC m=+1438.198980756" observedRunningTime="2025-12-03 11:17:07.452691318 +0000 UTC m=+1438.482692562" watchObservedRunningTime="2025-12-03 11:17:07.457548419 +0000 UTC m=+1438.487549663" Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.486809 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7bmnl" podStartSLOduration=2.897603839 podStartE2EDuration="8.486775549s" podCreationTimestamp="2025-12-03 11:16:59 +0000 UTC" firstStartedPulling="2025-12-03 11:17:01.195629328 +0000 UTC m=+1432.225630572" lastFinishedPulling="2025-12-03 11:17:06.784801038 +0000 UTC m=+1437.814802282" observedRunningTime="2025-12-03 11:17:07.474970852 +0000 UTC m=+1438.504972096" watchObservedRunningTime="2025-12-03 11:17:07.486775549 +0000 UTC m=+1438.516776803" Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.561625 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.561979 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-central-agent" containerID="cri-o://0b4da00108b3a7d9599756253f75f90d7e77bba4e627b71d3b042aa00cac3ced" gracePeriod=30 Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.562052 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="proxy-httpd" 
containerID="cri-o://69d03621bf33942744d907ee6cbf84126f088eb600eded8e8a97b70fa8a556f0" gracePeriod=30 Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.562130 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="sg-core" containerID="cri-o://4b83b11e9c464d963e1ad22c6a87f24141f861a10139d437bcf71b5494486356" gracePeriod=30 Dec 03 11:17:07 crc kubenswrapper[4756]: I1203 11:17:07.562416 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-notification-agent" containerID="cri-o://18860787d480be681c30370fd83acbed4e19b291dc8ed776036be4e8e85eb1ad" gracePeriod=30 Dec 03 11:17:08 crc kubenswrapper[4756]: I1203 11:17:08.514065 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerID="69d03621bf33942744d907ee6cbf84126f088eb600eded8e8a97b70fa8a556f0" exitCode=0 Dec 03 11:17:08 crc kubenswrapper[4756]: I1203 11:17:08.514523 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerID="4b83b11e9c464d963e1ad22c6a87f24141f861a10139d437bcf71b5494486356" exitCode=2 Dec 03 11:17:08 crc kubenswrapper[4756]: I1203 11:17:08.514534 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerID="0b4da00108b3a7d9599756253f75f90d7e77bba4e627b71d3b042aa00cac3ced" exitCode=0 Dec 03 11:17:08 crc kubenswrapper[4756]: I1203 11:17:08.514413 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerDied","Data":"69d03621bf33942744d907ee6cbf84126f088eb600eded8e8a97b70fa8a556f0"} Dec 03 11:17:08 crc kubenswrapper[4756]: I1203 11:17:08.514617 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerDied","Data":"4b83b11e9c464d963e1ad22c6a87f24141f861a10139d437bcf71b5494486356"} Dec 03 11:17:08 crc kubenswrapper[4756]: I1203 11:17:08.514638 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerDied","Data":"0b4da00108b3a7d9599756253f75f90d7e77bba4e627b71d3b042aa00cac3ced"} Dec 03 11:17:08 crc kubenswrapper[4756]: I1203 11:17:08.519466 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f5a21d10-a957-4ce1-b804-b75db51fe53c","Type":"ContainerStarted","Data":"ffc9beb6a31d424e97b83dbda9067102da83d588c3f1850ae0162a56f7d9cc24"} Dec 03 11:17:09 crc kubenswrapper[4756]: I1203 11:17:09.791787 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:17:09 crc kubenswrapper[4756]: I1203 11:17:09.792516 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:17:09 crc kubenswrapper[4756]: I1203 11:17:09.881135 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 11:17:09 crc kubenswrapper[4756]: I1203 11:17:09.881197 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 11:17:10 crc kubenswrapper[4756]: I1203 11:17:10.009886 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 11:17:10 crc kubenswrapper[4756]: I1203 11:17:10.054729 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 11:17:10 crc kubenswrapper[4756]: I1203 11:17:10.575767 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Dec 03 11:17:10 crc kubenswrapper[4756]: I1203 11:17:10.856818 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7bmnl" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="registry-server" probeResult="failure" output=< Dec 03 11:17:10 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 11:17:10 crc kubenswrapper[4756]: > Dec 03 11:17:10 crc kubenswrapper[4756]: I1203 11:17:10.898314 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 11:17:10 crc kubenswrapper[4756]: I1203 11:17:10.898330 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 11:17:11 crc kubenswrapper[4756]: I1203 11:17:11.830106 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:17:11 crc kubenswrapper[4756]: I1203 11:17:11.830156 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:17:12 crc kubenswrapper[4756]: I1203 11:17:12.870297 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 11:17:12 crc kubenswrapper[4756]: I1203 11:17:12.912226 4756 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.580352 4756 generic.go:334] "Generic (PLEG): container finished" podID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerID="18860787d480be681c30370fd83acbed4e19b291dc8ed776036be4e8e85eb1ad" exitCode=0 Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.580681 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerDied","Data":"18860787d480be681c30370fd83acbed4e19b291dc8ed776036be4e8e85eb1ad"} Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.715364 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.751065 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-combined-ca-bundle\") pod \"f0e483b8-8712-403c-bac4-f5893d35fc6f\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.751149 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfq66\" (UniqueName: \"kubernetes.io/projected/f0e483b8-8712-403c-bac4-f5893d35fc6f-kube-api-access-pfq66\") pod \"f0e483b8-8712-403c-bac4-f5893d35fc6f\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.751271 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-log-httpd\") pod 
\"f0e483b8-8712-403c-bac4-f5893d35fc6f\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.751316 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-sg-core-conf-yaml\") pod \"f0e483b8-8712-403c-bac4-f5893d35fc6f\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.751440 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-run-httpd\") pod \"f0e483b8-8712-403c-bac4-f5893d35fc6f\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.751464 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-scripts\") pod \"f0e483b8-8712-403c-bac4-f5893d35fc6f\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.751552 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-config-data\") pod \"f0e483b8-8712-403c-bac4-f5893d35fc6f\" (UID: \"f0e483b8-8712-403c-bac4-f5893d35fc6f\") " Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.752666 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f0e483b8-8712-403c-bac4-f5893d35fc6f" (UID: "f0e483b8-8712-403c-bac4-f5893d35fc6f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.754082 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f0e483b8-8712-403c-bac4-f5893d35fc6f" (UID: "f0e483b8-8712-403c-bac4-f5893d35fc6f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.764395 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-scripts" (OuterVolumeSpecName: "scripts") pod "f0e483b8-8712-403c-bac4-f5893d35fc6f" (UID: "f0e483b8-8712-403c-bac4-f5893d35fc6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.764555 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e483b8-8712-403c-bac4-f5893d35fc6f-kube-api-access-pfq66" (OuterVolumeSpecName: "kube-api-access-pfq66") pod "f0e483b8-8712-403c-bac4-f5893d35fc6f" (UID: "f0e483b8-8712-403c-bac4-f5893d35fc6f"). InnerVolumeSpecName "kube-api-access-pfq66". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.806779 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f0e483b8-8712-403c-bac4-f5893d35fc6f" (UID: "f0e483b8-8712-403c-bac4-f5893d35fc6f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.853485 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfq66\" (UniqueName: \"kubernetes.io/projected/f0e483b8-8712-403c-bac4-f5893d35fc6f-kube-api-access-pfq66\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.853520 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.853530 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.853586 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0e483b8-8712-403c-bac4-f5893d35fc6f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.853595 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.868560 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0e483b8-8712-403c-bac4-f5893d35fc6f" (UID: "f0e483b8-8712-403c-bac4-f5893d35fc6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.904603 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-config-data" (OuterVolumeSpecName: "config-data") pod "f0e483b8-8712-403c-bac4-f5893d35fc6f" (UID: "f0e483b8-8712-403c-bac4-f5893d35fc6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.956101 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:13 crc kubenswrapper[4756]: I1203 11:17:13.956137 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e483b8-8712-403c-bac4-f5893d35fc6f-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.597786 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0e483b8-8712-403c-bac4-f5893d35fc6f","Type":"ContainerDied","Data":"d2c8ae5286a06413922f60da9663b679f72fdfb39a0a5d7d041784df5874c1d0"} Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.597974 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.598012 4756 scope.go:117] "RemoveContainer" containerID="69d03621bf33942744d907ee6cbf84126f088eb600eded8e8a97b70fa8a556f0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.632381 4756 scope.go:117] "RemoveContainer" containerID="4b83b11e9c464d963e1ad22c6a87f24141f861a10139d437bcf71b5494486356" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.643491 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.661636 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.665128 4756 scope.go:117] "RemoveContainer" containerID="18860787d480be681c30370fd83acbed4e19b291dc8ed776036be4e8e85eb1ad" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.679325 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:14 crc kubenswrapper[4756]: E1203 11:17:14.680930 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="proxy-httpd" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.681080 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="proxy-httpd" Dec 03 11:17:14 crc kubenswrapper[4756]: E1203 11:17:14.681306 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-central-agent" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.681392 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-central-agent" Dec 03 11:17:14 crc kubenswrapper[4756]: E1203 11:17:14.681506 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-notification-agent" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.681586 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-notification-agent" Dec 03 11:17:14 crc kubenswrapper[4756]: E1203 11:17:14.681672 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="sg-core" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.681739 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="sg-core" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.682136 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="sg-core" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.682244 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-central-agent" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.682335 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="proxy-httpd" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.682415 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" containerName="ceilometer-notification-agent" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.685103 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.694276 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.694633 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.694817 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.703713 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.725285 4756 scope.go:117] "RemoveContainer" containerID="0b4da00108b3a7d9599756253f75f90d7e77bba4e627b71d3b042aa00cac3ced" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775416 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775484 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49jfc\" (UniqueName: \"kubernetes.io/projected/62ea1027-e372-4e3b-a0b2-cba870d1b74b-kube-api-access-49jfc\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775523 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775550 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-run-httpd\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775668 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775781 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-config-data\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775830 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-scripts\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.775854 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-log-httpd\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898106 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898243 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-config-data\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898288 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-scripts\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898306 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-log-httpd\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898372 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49jfc\" (UniqueName: \"kubernetes.io/projected/62ea1027-e372-4e3b-a0b2-cba870d1b74b-kube-api-access-49jfc\") pod \"ceilometer-0\" (UID: 
\"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898402 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.898427 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-run-httpd\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.899371 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-run-httpd\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.904072 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-log-httpd\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.909825 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.911298 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-config-data\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.912027 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-scripts\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.913864 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.922900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49jfc\" (UniqueName: \"kubernetes.io/projected/62ea1027-e372-4e3b-a0b2-cba870d1b74b-kube-api-access-49jfc\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:14 crc kubenswrapper[4756]: I1203 11:17:14.928245 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " pod="openstack/ceilometer-0" Dec 03 11:17:15 crc kubenswrapper[4756]: I1203 11:17:15.033440 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:15 crc kubenswrapper[4756]: I1203 11:17:15.250622 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e483b8-8712-403c-bac4-f5893d35fc6f" path="/var/lib/kubelet/pods/f0e483b8-8712-403c-bac4-f5893d35fc6f/volumes" Dec 03 11:17:15 crc kubenswrapper[4756]: I1203 11:17:15.579734 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:15 crc kubenswrapper[4756]: I1203 11:17:15.612939 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerStarted","Data":"1477846e9575523c65c2ccb1650e128f52aee85389a6f282f740a7044bf17366"} Dec 03 11:17:15 crc kubenswrapper[4756]: I1203 11:17:15.816684 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 03 11:17:15 crc kubenswrapper[4756]: I1203 11:17:15.917358 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 11:17:16 crc kubenswrapper[4756]: I1203 11:17:16.629652 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerStarted","Data":"e238a71fc4dc6fbef7d8a6cec123d09aaa71ddd90277b27f6b7c42117ba6476e"} Dec 03 11:17:17 crc kubenswrapper[4756]: I1203 11:17:17.645880 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerStarted","Data":"d34181e9c7b113240008d8f6a4907f60850c0afeebc7606a79b965c7a45fec29"} Dec 03 11:17:18 crc kubenswrapper[4756]: I1203 11:17:18.660077 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerStarted","Data":"09370e576da4bd8a2007edf0114b87c65fd78e56cf1e5aef78e9a17a7f6d2409"} 
Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.299961 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gw22s"] Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.302833 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.311761 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gw22s"] Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.407538 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-catalog-content\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.407611 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-utilities\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.407715 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj25s\" (UniqueName: \"kubernetes.io/projected/6df1867a-5eed-4be5-a096-4145777df69d-kube-api-access-cj25s\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.510017 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-catalog-content\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.510106 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-utilities\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.510204 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj25s\" (UniqueName: \"kubernetes.io/projected/6df1867a-5eed-4be5-a096-4145777df69d-kube-api-access-cj25s\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.510725 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-catalog-content\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.510983 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-utilities\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.536745 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj25s\" (UniqueName: 
\"kubernetes.io/projected/6df1867a-5eed-4be5-a096-4145777df69d-kube-api-access-cj25s\") pod \"community-operators-gw22s\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.629720 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.857290 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.900541 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.900785 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.945418 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.945926 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 11:17:19 crc kubenswrapper[4756]: I1203 11:17:19.979633 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 11:17:20 crc kubenswrapper[4756]: I1203 11:17:20.254665 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gw22s"] Dec 03 11:17:20 crc kubenswrapper[4756]: I1203 11:17:20.687178 4756 generic.go:334] "Generic (PLEG): container finished" podID="6df1867a-5eed-4be5-a096-4145777df69d" containerID="f9a6d52dbe50bec454d94e339a0deb58d206f7f9463499a07b9cd2f8abff7c4e" exitCode=0 Dec 03 11:17:20 crc kubenswrapper[4756]: I1203 11:17:20.687818 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-gw22s" event={"ID":"6df1867a-5eed-4be5-a096-4145777df69d","Type":"ContainerDied","Data":"f9a6d52dbe50bec454d94e339a0deb58d206f7f9463499a07b9cd2f8abff7c4e"} Dec 03 11:17:20 crc kubenswrapper[4756]: I1203 11:17:20.687869 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw22s" event={"ID":"6df1867a-5eed-4be5-a096-4145777df69d","Type":"ContainerStarted","Data":"440a2881164b4725d79a2d2478baca576a17f8ce2ef08db7f672dd9e34b0afa4"} Dec 03 11:17:20 crc kubenswrapper[4756]: I1203 11:17:20.740175 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerStarted","Data":"79f942a6105844349e2ad7b444bff5b916f191800c7f3f9d24e1251ff4e6d91f"} Dec 03 11:17:20 crc kubenswrapper[4756]: I1203 11:17:20.741071 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 11:17:20 crc kubenswrapper[4756]: I1203 11:17:20.791129 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.02688497 podStartE2EDuration="6.791097029s" podCreationTimestamp="2025-12-03 11:17:14 +0000 UTC" firstStartedPulling="2025-12-03 11:17:15.594410081 +0000 UTC m=+1446.624411325" lastFinishedPulling="2025-12-03 11:17:20.35862214 +0000 UTC m=+1451.388623384" observedRunningTime="2025-12-03 11:17:20.768793574 +0000 UTC m=+1451.798794828" watchObservedRunningTime="2025-12-03 11:17:20.791097029 +0000 UTC m=+1451.821098283" Dec 03 11:17:21 crc kubenswrapper[4756]: I1203 11:17:21.755046 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw22s" event={"ID":"6df1867a-5eed-4be5-a096-4145777df69d","Type":"ContainerStarted","Data":"8f36959dc313997b2fb0b231133b2676f6159cfaee594014c920382d5afc3859"} Dec 03 11:17:21 crc kubenswrapper[4756]: I1203 
11:17:21.834933 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 11:17:21 crc kubenswrapper[4756]: I1203 11:17:21.836281 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 11:17:21 crc kubenswrapper[4756]: I1203 11:17:21.849231 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 11:17:21 crc kubenswrapper[4756]: I1203 11:17:21.849680 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.418524 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.592746 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2vps\" (UniqueName: \"kubernetes.io/projected/7f3dac4c-efe2-405f-ade3-b2464752ebd4-kube-api-access-f2vps\") pod \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.592825 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-config-data\") pod \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.593046 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-combined-ca-bundle\") pod \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\" (UID: \"7f3dac4c-efe2-405f-ade3-b2464752ebd4\") " Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.601404 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/7f3dac4c-efe2-405f-ade3-b2464752ebd4-kube-api-access-f2vps" (OuterVolumeSpecName: "kube-api-access-f2vps") pod "7f3dac4c-efe2-405f-ade3-b2464752ebd4" (UID: "7f3dac4c-efe2-405f-ade3-b2464752ebd4"). InnerVolumeSpecName "kube-api-access-f2vps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.635069 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f3dac4c-efe2-405f-ade3-b2464752ebd4" (UID: "7f3dac4c-efe2-405f-ade3-b2464752ebd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.636618 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-config-data" (OuterVolumeSpecName: "config-data") pod "7f3dac4c-efe2-405f-ade3-b2464752ebd4" (UID: "7f3dac4c-efe2-405f-ade3-b2464752ebd4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.697303 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2vps\" (UniqueName: \"kubernetes.io/projected/7f3dac4c-efe2-405f-ade3-b2464752ebd4-kube-api-access-f2vps\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.697355 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.697377 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3dac4c-efe2-405f-ade3-b2464752ebd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.768686 4756 generic.go:334] "Generic (PLEG): container finished" podID="6df1867a-5eed-4be5-a096-4145777df69d" containerID="8f36959dc313997b2fb0b231133b2676f6159cfaee594014c920382d5afc3859" exitCode=0 Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.768790 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw22s" event={"ID":"6df1867a-5eed-4be5-a096-4145777df69d","Type":"ContainerDied","Data":"8f36959dc313997b2fb0b231133b2676f6159cfaee594014c920382d5afc3859"} Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.771082 4756 generic.go:334] "Generic (PLEG): container finished" podID="7f3dac4c-efe2-405f-ade3-b2464752ebd4" containerID="32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d" exitCode=137 Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.771195 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.771199 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7f3dac4c-efe2-405f-ade3-b2464752ebd4","Type":"ContainerDied","Data":"32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d"} Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.771308 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7f3dac4c-efe2-405f-ade3-b2464752ebd4","Type":"ContainerDied","Data":"837871ff9af0cb32303870680077d39bcb1306b4571cbc18c9b8db801b76bab0"} Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.771380 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.771392 4756 scope.go:117] "RemoveContainer" containerID="32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.783527 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.814469 4756 scope.go:117] "RemoveContainer" containerID="32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d" Dec 03 11:17:22 crc kubenswrapper[4756]: E1203 11:17:22.815072 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d\": container with ID starting with 32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d not found: ID does not exist" containerID="32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.815123 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d"} err="failed to get container status \"32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d\": rpc error: code = NotFound desc = could not find container \"32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d\": container with ID starting with 32804e86654fcb2ed557652a35fad3ee4b631c0de9a58643d1aa3c8cb011871d not found: ID does not exist" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.861314 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.873409 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.887264 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:17:22 crc kubenswrapper[4756]: E1203 11:17:22.887990 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3dac4c-efe2-405f-ade3-b2464752ebd4" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.888012 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3dac4c-efe2-405f-ade3-b2464752ebd4" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.888365 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3dac4c-efe2-405f-ade3-b2464752ebd4" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.890066 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.898054 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.898416 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.898565 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 03 11:17:22 crc kubenswrapper[4756]: I1203 11:17:22.940046 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.005504 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.005599 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.005632 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtb5\" (UniqueName: \"kubernetes.io/projected/4914547a-e3af-4ae6-8f6b-a210ed169dfd-kube-api-access-qvtb5\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 
03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.005669 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.007035 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.073717 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xlvvh"] Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.083810 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.108731 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.108786 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtb5\" (UniqueName: \"kubernetes.io/projected/4914547a-e3af-4ae6-8f6b-a210ed169dfd-kube-api-access-qvtb5\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.108816 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.108938 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.109007 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" 
Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.118311 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.118357 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.126424 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.141411 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4914547a-e3af-4ae6-8f6b-a210ed169dfd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.152370 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xlvvh"] Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.159093 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtb5\" (UniqueName: \"kubernetes.io/projected/4914547a-e3af-4ae6-8f6b-a210ed169dfd-kube-api-access-qvtb5\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"4914547a-e3af-4ae6-8f6b-a210ed169dfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.213575 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-config\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.213694 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.246837 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.247023 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.247193 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" 
(UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.247257 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvv4\" (UniqueName: \"kubernetes.io/projected/0f54d4af-c690-4860-924b-83eacde4c90c-kube-api-access-5lvv4\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.266478 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.362311 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f3dac4c-efe2-405f-ade3-b2464752ebd4" path="/var/lib/kubelet/pods/7f3dac4c-efe2-405f-ade3-b2464752ebd4/volumes" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.365587 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-config\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.365654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.365927 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: 
\"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.366309 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.366385 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.366426 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvv4\" (UniqueName: \"kubernetes.io/projected/0f54d4af-c690-4860-924b-83eacde4c90c-kube-api-access-5lvv4\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.368739 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.371183 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " 
pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.372149 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.374081 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-config\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.375617 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.408311 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvv4\" (UniqueName: \"kubernetes.io/projected/0f54d4af-c690-4860-924b-83eacde4c90c-kube-api-access-5lvv4\") pod \"dnsmasq-dns-89c5cd4d5-xlvvh\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.576653 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:23 crc kubenswrapper[4756]: I1203 11:17:23.884091 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 11:17:23 crc kubenswrapper[4756]: W1203 11:17:23.891628 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4914547a_e3af_4ae6_8f6b_a210ed169dfd.slice/crio-6c8e1e21d3d9f913b03aaab2c1d88b4b1c4fb2cb652e40c711ed48fdb9a877f4 WatchSource:0}: Error finding container 6c8e1e21d3d9f913b03aaab2c1d88b4b1c4fb2cb652e40c711ed48fdb9a877f4: Status 404 returned error can't find the container with id 6c8e1e21d3d9f913b03aaab2c1d88b4b1c4fb2cb652e40c711ed48fdb9a877f4 Dec 03 11:17:24 crc kubenswrapper[4756]: W1203 11:17:24.229444 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f54d4af_c690_4860_924b_83eacde4c90c.slice/crio-f925e6ce5d6d8a9070b248de2a21b963d030dbeb10f60bab91b1a0fced0aa560 WatchSource:0}: Error finding container f925e6ce5d6d8a9070b248de2a21b963d030dbeb10f60bab91b1a0fced0aa560: Status 404 returned error can't find the container with id f925e6ce5d6d8a9070b248de2a21b963d030dbeb10f60bab91b1a0fced0aa560 Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.230536 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xlvvh"] Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.492511 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bmnl"] Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.492868 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7bmnl" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="registry-server" containerID="cri-o://0466aa7d2153575da9355de2dad8009e1ec9121e36fd72cd300cfa2437988063" 
gracePeriod=2 Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.845689 4756 generic.go:334] "Generic (PLEG): container finished" podID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerID="0466aa7d2153575da9355de2dad8009e1ec9121e36fd72cd300cfa2437988063" exitCode=0 Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.846077 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmnl" event={"ID":"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a","Type":"ContainerDied","Data":"0466aa7d2153575da9355de2dad8009e1ec9121e36fd72cd300cfa2437988063"} Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.851234 4756 generic.go:334] "Generic (PLEG): container finished" podID="0f54d4af-c690-4860-924b-83eacde4c90c" containerID="7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c" exitCode=0 Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.852493 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" event={"ID":"0f54d4af-c690-4860-924b-83eacde4c90c","Type":"ContainerDied","Data":"7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c"} Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.852575 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" event={"ID":"0f54d4af-c690-4860-924b-83eacde4c90c","Type":"ContainerStarted","Data":"f925e6ce5d6d8a9070b248de2a21b963d030dbeb10f60bab91b1a0fced0aa560"} Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.862993 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4914547a-e3af-4ae6-8f6b-a210ed169dfd","Type":"ContainerStarted","Data":"50ee4d0f7f235adcc957b431970e3a2e33f60a3d4fd31deb36a23801c8327253"} Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.863068 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"4914547a-e3af-4ae6-8f6b-a210ed169dfd","Type":"ContainerStarted","Data":"6c8e1e21d3d9f913b03aaab2c1d88b4b1c4fb2cb652e40c711ed48fdb9a877f4"} Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.895128 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw22s" event={"ID":"6df1867a-5eed-4be5-a096-4145777df69d","Type":"ContainerStarted","Data":"29645aa3dac658a789950608697e7843d811cfed67d959586d2a72e7a3152d1f"} Dec 03 11:17:24 crc kubenswrapper[4756]: I1203 11:17:24.968344 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.968315198 podStartE2EDuration="2.968315198s" podCreationTimestamp="2025-12-03 11:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:24.943447542 +0000 UTC m=+1455.973448796" watchObservedRunningTime="2025-12-03 11:17:24.968315198 +0000 UTC m=+1455.998316442" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.114593 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gw22s" podStartSLOduration=3.442099654 podStartE2EDuration="6.114565092s" podCreationTimestamp="2025-12-03 11:17:19 +0000 UTC" firstStartedPulling="2025-12-03 11:17:20.690811255 +0000 UTC m=+1451.720812499" lastFinishedPulling="2025-12-03 11:17:23.363276693 +0000 UTC m=+1454.393277937" observedRunningTime="2025-12-03 11:17:25.003231655 +0000 UTC m=+1456.033232909" watchObservedRunningTime="2025-12-03 11:17:25.114565092 +0000 UTC m=+1456.144566336" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.497152 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.569873 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-utilities\") pod \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.570056 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-catalog-content\") pod \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.570253 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzsn4\" (UniqueName: \"kubernetes.io/projected/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-kube-api-access-qzsn4\") pod \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\" (UID: \"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a\") " Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.571397 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-utilities" (OuterVolumeSpecName: "utilities") pod "7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" (UID: "7f6abf4b-a13f-4d08-8a00-5ce0e985b77a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.571916 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.643706 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-kube-api-access-qzsn4" (OuterVolumeSpecName: "kube-api-access-qzsn4") pod "7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" (UID: "7f6abf4b-a13f-4d08-8a00-5ce0e985b77a"). InnerVolumeSpecName "kube-api-access-qzsn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.682471 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzsn4\" (UniqueName: \"kubernetes.io/projected/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-kube-api-access-qzsn4\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.723118 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" (UID: "7f6abf4b-a13f-4d08-8a00-5ce0e985b77a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.785145 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.911362 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7bmnl" event={"ID":"7f6abf4b-a13f-4d08-8a00-5ce0e985b77a","Type":"ContainerDied","Data":"99e17c731d9b36878fb1e89ba09dcebf3cdc966ab081fe9d9e809be94d0df39d"} Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.911450 4756 scope.go:117] "RemoveContainer" containerID="0466aa7d2153575da9355de2dad8009e1ec9121e36fd72cd300cfa2437988063" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.911630 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7bmnl" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.919639 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" event={"ID":"0f54d4af-c690-4860-924b-83eacde4c90c","Type":"ContainerStarted","Data":"8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01"} Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.978192 4756 scope.go:117] "RemoveContainer" containerID="35f0a4c8b694b0e8faf556b339877dc17700a41b4996e0271f76b9014a1e47de" Dec 03 11:17:25 crc kubenswrapper[4756]: I1203 11:17:25.991681 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" podStartSLOduration=2.9916458759999998 podStartE2EDuration="2.991645876s" podCreationTimestamp="2025-12-03 11:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:25.963594463 +0000 UTC 
m=+1456.993595717" watchObservedRunningTime="2025-12-03 11:17:25.991645876 +0000 UTC m=+1457.021647150" Dec 03 11:17:26 crc kubenswrapper[4756]: I1203 11:17:26.031464 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7bmnl"] Dec 03 11:17:26 crc kubenswrapper[4756]: I1203 11:17:26.047177 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7bmnl"] Dec 03 11:17:26 crc kubenswrapper[4756]: I1203 11:17:26.062197 4756 scope.go:117] "RemoveContainer" containerID="01321217fdb85b134bb1a4d0db7d5d10862730fd41ccf8f474c69d3fc97099df" Dec 03 11:17:26 crc kubenswrapper[4756]: I1203 11:17:26.876571 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:17:26 crc kubenswrapper[4756]: I1203 11:17:26.876990 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-log" containerID="cri-o://a1e1fdb5f105a9535bb74f6c018d41d8a431ff7a555f0f6c2c45f6de3b567063" gracePeriod=30 Dec 03 11:17:26 crc kubenswrapper[4756]: I1203 11:17:26.877701 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-api" containerID="cri-o://f98f9f84b4f861a030abb4975fbb43f6da91d5513978168a1ab7407b92fcad01" gracePeriod=30 Dec 03 11:17:26 crc kubenswrapper[4756]: I1203 11:17:26.938837 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.248400 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" path="/var/lib/kubelet/pods/7f6abf4b-a13f-4d08-8a00-5ce0e985b77a/volumes" Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.290452 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.290860 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="ceilometer-central-agent" containerID="cri-o://e238a71fc4dc6fbef7d8a6cec123d09aaa71ddd90277b27f6b7c42117ba6476e" gracePeriod=30 Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.291032 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="sg-core" containerID="cri-o://09370e576da4bd8a2007edf0114b87c65fd78e56cf1e5aef78e9a17a7f6d2409" gracePeriod=30 Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.291083 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="ceilometer-notification-agent" containerID="cri-o://d34181e9c7b113240008d8f6a4907f60850c0afeebc7606a79b965c7a45fec29" gracePeriod=30 Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.291093 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="proxy-httpd" containerID="cri-o://79f942a6105844349e2ad7b444bff5b916f191800c7f3f9d24e1251ff4e6d91f" gracePeriod=30 Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.950400 4756 generic.go:334] "Generic (PLEG): container finished" podID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerID="79f942a6105844349e2ad7b444bff5b916f191800c7f3f9d24e1251ff4e6d91f" exitCode=0 Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.950757 4756 generic.go:334] "Generic (PLEG): container finished" podID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerID="09370e576da4bd8a2007edf0114b87c65fd78e56cf1e5aef78e9a17a7f6d2409" exitCode=2 Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 
11:17:27.950464 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerDied","Data":"79f942a6105844349e2ad7b444bff5b916f191800c7f3f9d24e1251ff4e6d91f"} Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.950837 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerDied","Data":"09370e576da4bd8a2007edf0114b87c65fd78e56cf1e5aef78e9a17a7f6d2409"} Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.953426 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerID="a1e1fdb5f105a9535bb74f6c018d41d8a431ff7a555f0f6c2c45f6de3b567063" exitCode=143 Dec 03 11:17:27 crc kubenswrapper[4756]: I1203 11:17:27.953591 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e","Type":"ContainerDied","Data":"a1e1fdb5f105a9535bb74f6c018d41d8a431ff7a555f0f6c2c45f6de3b567063"} Dec 03 11:17:28 crc kubenswrapper[4756]: I1203 11:17:28.267420 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:28 crc kubenswrapper[4756]: I1203 11:17:28.975678 4756 generic.go:334] "Generic (PLEG): container finished" podID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerID="d34181e9c7b113240008d8f6a4907f60850c0afeebc7606a79b965c7a45fec29" exitCode=0 Dec 03 11:17:28 crc kubenswrapper[4756]: I1203 11:17:28.975732 4756 generic.go:334] "Generic (PLEG): container finished" podID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerID="e238a71fc4dc6fbef7d8a6cec123d09aaa71ddd90277b27f6b7c42117ba6476e" exitCode=0 Dec 03 11:17:28 crc kubenswrapper[4756]: I1203 11:17:28.975767 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerDied","Data":"d34181e9c7b113240008d8f6a4907f60850c0afeebc7606a79b965c7a45fec29"} Dec 03 11:17:28 crc kubenswrapper[4756]: I1203 11:17:28.975814 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerDied","Data":"e238a71fc4dc6fbef7d8a6cec123d09aaa71ddd90277b27f6b7c42117ba6476e"} Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.408747 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.501818 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-scripts\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.501877 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-config-data\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.501915 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-ceilometer-tls-certs\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.501976 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-log-httpd\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: 
\"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.502039 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-run-httpd\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.502078 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-sg-core-conf-yaml\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.502256 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-combined-ca-bundle\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.502335 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49jfc\" (UniqueName: \"kubernetes.io/projected/62ea1027-e372-4e3b-a0b2-cba870d1b74b-kube-api-access-49jfc\") pod \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\" (UID: \"62ea1027-e372-4e3b-a0b2-cba870d1b74b\") " Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.502785 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.502857 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.514092 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-scripts" (OuterVolumeSpecName: "scripts") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.532149 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ea1027-e372-4e3b-a0b2-cba870d1b74b-kube-api-access-49jfc" (OuterVolumeSpecName: "kube-api-access-49jfc") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "kube-api-access-49jfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.556181 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.585878 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.605110 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.605140 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49jfc\" (UniqueName: \"kubernetes.io/projected/62ea1027-e372-4e3b-a0b2-cba870d1b74b-kube-api-access-49jfc\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.605153 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.605164 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.605172 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.605183 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/62ea1027-e372-4e3b-a0b2-cba870d1b74b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.623343 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.630303 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.630398 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.648793 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-config-data" (OuterVolumeSpecName: "config-data") pod "62ea1027-e372-4e3b-a0b2-cba870d1b74b" (UID: "62ea1027-e372-4e3b-a0b2-cba870d1b74b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.696364 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.708013 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.708057 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62ea1027-e372-4e3b-a0b2-cba870d1b74b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.989676 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62ea1027-e372-4e3b-a0b2-cba870d1b74b","Type":"ContainerDied","Data":"1477846e9575523c65c2ccb1650e128f52aee85389a6f282f740a7044bf17366"} Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.989764 4756 scope.go:117] "RemoveContainer" containerID="79f942a6105844349e2ad7b444bff5b916f191800c7f3f9d24e1251ff4e6d91f" Dec 03 11:17:29 crc kubenswrapper[4756]: I1203 11:17:29.989706 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.025356 4756 scope.go:117] "RemoveContainer" containerID="09370e576da4bd8a2007edf0114b87c65fd78e56cf1e5aef78e9a17a7f6d2409" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.038623 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.061575 4756 scope.go:117] "RemoveContainer" containerID="d34181e9c7b113240008d8f6a4907f60850c0afeebc7606a79b965c7a45fec29" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.078391 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.093028 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.121967 4756 scope.go:117] "RemoveContainer" containerID="e238a71fc4dc6fbef7d8a6cec123d09aaa71ddd90277b27f6b7c42117ba6476e" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.144057 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:30 crc kubenswrapper[4756]: E1203 11:17:30.145002 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="registry-server" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.145042 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="registry-server" Dec 03 11:17:30 crc kubenswrapper[4756]: E1203 11:17:30.145085 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="ceilometer-notification-agent" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.145410 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" 
containerName="ceilometer-notification-agent" Dec 03 11:17:30 crc kubenswrapper[4756]: E1203 11:17:30.145443 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="ceilometer-central-agent" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.145455 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="ceilometer-central-agent" Dec 03 11:17:30 crc kubenswrapper[4756]: E1203 11:17:30.145466 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="proxy-httpd" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.145474 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="proxy-httpd" Dec 03 11:17:30 crc kubenswrapper[4756]: E1203 11:17:30.145503 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="extract-utilities" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.145516 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="extract-utilities" Dec 03 11:17:30 crc kubenswrapper[4756]: E1203 11:17:30.145550 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="sg-core" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.145558 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="sg-core" Dec 03 11:17:30 crc kubenswrapper[4756]: E1203 11:17:30.146180 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="extract-content" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.146192 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" 
containerName="extract-content" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.146802 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6abf4b-a13f-4d08-8a00-5ce0e985b77a" containerName="registry-server" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.146834 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="ceilometer-notification-agent" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.146865 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="ceilometer-central-agent" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.146879 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="sg-core" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.146903 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" containerName="proxy-httpd" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.157685 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.166290 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.166364 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.167758 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.181764 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.219901 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.220175 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.220415 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-config-data\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.220711 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wh77t\" (UniqueName: \"kubernetes.io/projected/a02f2929-8462-4790-a8fd-810400f3c3b8-kube-api-access-wh77t\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.220811 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.221006 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.221074 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.221149 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-scripts\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323091 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-log-httpd\") pod \"ceilometer-0\" (UID: 
\"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323154 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323182 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-scripts\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323251 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323282 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323316 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-config-data\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323392 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wh77t\" (UniqueName: \"kubernetes.io/projected/a02f2929-8462-4790-a8fd-810400f3c3b8-kube-api-access-wh77t\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323419 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323591 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.323854 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.328250 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.328461 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc 
kubenswrapper[4756]: I1203 11:17:30.329540 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.330530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-scripts\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.331251 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-config-data\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.351896 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh77t\" (UniqueName: \"kubernetes.io/projected/a02f2929-8462-4790-a8fd-810400f3c3b8-kube-api-access-wh77t\") pod \"ceilometer-0\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.492315 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:30 crc kubenswrapper[4756]: I1203 11:17:30.968247 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:31 crc kubenswrapper[4756]: I1203 11:17:31.006586 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerStarted","Data":"2b88d2950055df502f609a29361e7ee9e06a1decc924a2584c19f3837c99ce36"} Dec 03 11:17:31 crc kubenswrapper[4756]: I1203 11:17:31.080729 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gw22s"] Dec 03 11:17:31 crc kubenswrapper[4756]: I1203 11:17:31.246888 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ea1027-e372-4e3b-a0b2-cba870d1b74b" path="/var/lib/kubelet/pods/62ea1027-e372-4e3b-a0b2-cba870d1b74b/volumes" Dec 03 11:17:31 crc kubenswrapper[4756]: I1203 11:17:31.412990 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:31 crc kubenswrapper[4756]: I1203 11:17:31.830505 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": dial tcp 10.217.0.195:8774: connect: connection refused" Dec 03 11:17:31 crc kubenswrapper[4756]: I1203 11:17:31.831123 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": dial tcp 10.217.0.195:8774: connect: connection refused" Dec 03 11:17:32 crc kubenswrapper[4756]: I1203 11:17:32.021335 4756 generic.go:334] "Generic (PLEG): container finished" podID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" 
containerID="f98f9f84b4f861a030abb4975fbb43f6da91d5513978168a1ab7407b92fcad01" exitCode=0 Dec 03 11:17:32 crc kubenswrapper[4756]: I1203 11:17:32.021458 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e","Type":"ContainerDied","Data":"f98f9f84b4f861a030abb4975fbb43f6da91d5513978168a1ab7407b92fcad01"} Dec 03 11:17:32 crc kubenswrapper[4756]: I1203 11:17:32.021616 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gw22s" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="registry-server" containerID="cri-o://29645aa3dac658a789950608697e7843d811cfed67d959586d2a72e7a3152d1f" gracePeriod=2 Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.037478 4756 generic.go:334] "Generic (PLEG): container finished" podID="6df1867a-5eed-4be5-a096-4145777df69d" containerID="29645aa3dac658a789950608697e7843d811cfed67d959586d2a72e7a3152d1f" exitCode=0 Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.037553 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw22s" event={"ID":"6df1867a-5eed-4be5-a096-4145777df69d","Type":"ContainerDied","Data":"29645aa3dac658a789950608697e7843d811cfed67d959586d2a72e7a3152d1f"} Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.194820 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.267417 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.305922 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-logs\") pod \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.306030 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gs9b\" (UniqueName: \"kubernetes.io/projected/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-kube-api-access-2gs9b\") pod \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.306205 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-combined-ca-bundle\") pod \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.306244 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-config-data\") pod \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\" (UID: \"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e\") " Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.306605 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-logs" (OuterVolumeSpecName: "logs") pod "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" (UID: "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.306919 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.310296 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-kube-api-access-2gs9b" (OuterVolumeSpecName: "kube-api-access-2gs9b") pod "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" (UID: "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e"). InnerVolumeSpecName "kube-api-access-2gs9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.321830 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.351761 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" (UID: "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.361118 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-config-data" (OuterVolumeSpecName: "config-data") pod "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" (UID: "2d8d1bfe-24ca-4cde-9038-a3a01ebd807e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.408899 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gs9b\" (UniqueName: \"kubernetes.io/projected/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-kube-api-access-2gs9b\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.408943 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.408976 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.579226 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.707545 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-f5wpg"] Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.708095 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" podUID="805494b2-fcad-4208-895c-4b2708a9d129" containerName="dnsmasq-dns" containerID="cri-o://dcd3bff036886b61e12db8c718f70ff40285a681f76eccc016126fc7346349ca" gracePeriod=10 Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.732476 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.820045 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-utilities\") pod \"6df1867a-5eed-4be5-a096-4145777df69d\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.820126 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj25s\" (UniqueName: \"kubernetes.io/projected/6df1867a-5eed-4be5-a096-4145777df69d-kube-api-access-cj25s\") pod \"6df1867a-5eed-4be5-a096-4145777df69d\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.820489 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-catalog-content\") pod \"6df1867a-5eed-4be5-a096-4145777df69d\" (UID: \"6df1867a-5eed-4be5-a096-4145777df69d\") " Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.821458 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-utilities" (OuterVolumeSpecName: "utilities") pod "6df1867a-5eed-4be5-a096-4145777df69d" (UID: "6df1867a-5eed-4be5-a096-4145777df69d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.828191 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df1867a-5eed-4be5-a096-4145777df69d-kube-api-access-cj25s" (OuterVolumeSpecName: "kube-api-access-cj25s") pod "6df1867a-5eed-4be5-a096-4145777df69d" (UID: "6df1867a-5eed-4be5-a096-4145777df69d"). InnerVolumeSpecName "kube-api-access-cj25s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.897371 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" podUID="805494b2-fcad-4208-895c-4b2708a9d129" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.188:5353: connect: connection refused" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.904231 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6df1867a-5eed-4be5-a096-4145777df69d" (UID: "6df1867a-5eed-4be5-a096-4145777df69d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.922790 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.922824 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6df1867a-5eed-4be5-a096-4145777df69d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:33 crc kubenswrapper[4756]: I1203 11:17:33.922834 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj25s\" (UniqueName: \"kubernetes.io/projected/6df1867a-5eed-4be5-a096-4145777df69d-kube-api-access-cj25s\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.064365 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerStarted","Data":"c10a7a78462cdc3426281080d197a61f860a98af4e168d51dc575399c21277eb"} Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.070070 
4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gw22s" event={"ID":"6df1867a-5eed-4be5-a096-4145777df69d","Type":"ContainerDied","Data":"440a2881164b4725d79a2d2478baca576a17f8ce2ef08db7f672dd9e34b0afa4"} Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.070411 4756 scope.go:117] "RemoveContainer" containerID="29645aa3dac658a789950608697e7843d811cfed67d959586d2a72e7a3152d1f" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.070280 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gw22s" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.074485 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d8d1bfe-24ca-4cde-9038-a3a01ebd807e","Type":"ContainerDied","Data":"6699e165be5fed2f3e67ce143905a275e147ca8e3c7a7a694a2ac1c4f7a8f316"} Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.074655 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.089297 4756 generic.go:334] "Generic (PLEG): container finished" podID="805494b2-fcad-4208-895c-4b2708a9d129" containerID="dcd3bff036886b61e12db8c718f70ff40285a681f76eccc016126fc7346349ca" exitCode=0 Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.089430 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" event={"ID":"805494b2-fcad-4208-895c-4b2708a9d129","Type":"ContainerDied","Data":"dcd3bff036886b61e12db8c718f70ff40285a681f76eccc016126fc7346349ca"} Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.107213 4756 scope.go:117] "RemoveContainer" containerID="8f36959dc313997b2fb0b231133b2676f6159cfaee594014c920382d5afc3859" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.117947 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.253926 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.297464 4756 scope.go:117] "RemoveContainer" containerID="f9a6d52dbe50bec454d94e339a0deb58d206f7f9463499a07b9cd2f8abff7c4e" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.302406 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.327026 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gw22s"] Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.364258 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gw22s"] Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.368860 4756 scope.go:117] "RemoveContainer" containerID="f98f9f84b4f861a030abb4975fbb43f6da91d5513978168a1ab7407b92fcad01" Dec 03 11:17:34 crc 
kubenswrapper[4756]: I1203 11:17:34.377159 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 11:17:34 crc kubenswrapper[4756]: E1203 11:17:34.377655 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="extract-utilities" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.377674 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="extract-utilities" Dec 03 11:17:34 crc kubenswrapper[4756]: E1203 11:17:34.377724 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-log" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.377733 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-log" Dec 03 11:17:34 crc kubenswrapper[4756]: E1203 11:17:34.377748 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-api" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.377755 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-api" Dec 03 11:17:34 crc kubenswrapper[4756]: E1203 11:17:34.377826 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="extract-content" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.377836 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="extract-content" Dec 03 11:17:34 crc kubenswrapper[4756]: E1203 11:17:34.377848 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="registry-server" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.377855 4756 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="registry-server" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.378345 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-api" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.378373 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df1867a-5eed-4be5-a096-4145777df69d" containerName="registry-server" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.378396 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" containerName="nova-api-log" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.379736 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.382764 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.382853 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.382935 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.405722 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.438527 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-config-data\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.438593 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/959b66e9-f960-456d-8aeb-f7eb63d55b23-logs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.438657 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-internal-tls-certs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.438707 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.438777 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-public-tls-certs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.438833 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpf49\" (UniqueName: \"kubernetes.io/projected/959b66e9-f960-456d-8aeb-f7eb63d55b23-kube-api-access-zpf49\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.461390 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xt2f8"] Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.463281 
4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.468704 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.468733 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.473432 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.473730 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xt2f8"] Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.480055 4756 scope.go:117] "RemoveContainer" containerID="a1e1fdb5f105a9535bb74f6c018d41d8a431ff7a555f0f6c2c45f6de3b567063" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.540107 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-nb\") pod \"805494b2-fcad-4208-895c-4b2708a9d129\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.540241 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-sb\") pod \"805494b2-fcad-4208-895c-4b2708a9d129\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.540416 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-swift-storage-0\") pod 
\"805494b2-fcad-4208-895c-4b2708a9d129\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.540533 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-config\") pod \"805494b2-fcad-4208-895c-4b2708a9d129\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.540636 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdmrj\" (UniqueName: \"kubernetes.io/projected/805494b2-fcad-4208-895c-4b2708a9d129-kube-api-access-gdmrj\") pod \"805494b2-fcad-4208-895c-4b2708a9d129\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.540687 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-svc\") pod \"805494b2-fcad-4208-895c-4b2708a9d129\" (UID: \"805494b2-fcad-4208-895c-4b2708a9d129\") " Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541197 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75kt\" (UniqueName: \"kubernetes.io/projected/79717969-f844-4ec0-935f-2e2886597684-kube-api-access-t75kt\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541268 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-internal-tls-certs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541462 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-config-data\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541499 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541530 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541589 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-public-tls-certs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541653 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpf49\" (UniqueName: \"kubernetes.io/projected/959b66e9-f960-456d-8aeb-f7eb63d55b23-kube-api-access-zpf49\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541687 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-scripts\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541838 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-config-data\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.541871 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/959b66e9-f960-456d-8aeb-f7eb63d55b23-logs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.542366 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/959b66e9-f960-456d-8aeb-f7eb63d55b23-logs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.556182 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-public-tls-certs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.576347 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.576604 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-internal-tls-certs\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.581659 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-config-data\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.585810 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805494b2-fcad-4208-895c-4b2708a9d129-kube-api-access-gdmrj" (OuterVolumeSpecName: "kube-api-access-gdmrj") pod "805494b2-fcad-4208-895c-4b2708a9d129" (UID: "805494b2-fcad-4208-895c-4b2708a9d129"). InnerVolumeSpecName "kube-api-access-gdmrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.588456 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpf49\" (UniqueName: \"kubernetes.io/projected/959b66e9-f960-456d-8aeb-f7eb63d55b23-kube-api-access-zpf49\") pod \"nova-api-0\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.647996 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75kt\" (UniqueName: \"kubernetes.io/projected/79717969-f844-4ec0-935f-2e2886597684-kube-api-access-t75kt\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.648102 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-config-data\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.648146 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.653792 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-scripts\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc 
kubenswrapper[4756]: I1203 11:17:34.654190 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-config-data\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.654362 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdmrj\" (UniqueName: \"kubernetes.io/projected/805494b2-fcad-4208-895c-4b2708a9d129-kube-api-access-gdmrj\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.656000 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.664975 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-scripts\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.672388 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75kt\" (UniqueName: \"kubernetes.io/projected/79717969-f844-4ec0-935f-2e2886597684-kube-api-access-t75kt\") pod \"nova-cell1-cell-mapping-xt2f8\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.712285 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "805494b2-fcad-4208-895c-4b2708a9d129" (UID: "805494b2-fcad-4208-895c-4b2708a9d129"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.722360 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "805494b2-fcad-4208-895c-4b2708a9d129" (UID: "805494b2-fcad-4208-895c-4b2708a9d129"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.725489 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "805494b2-fcad-4208-895c-4b2708a9d129" (UID: "805494b2-fcad-4208-895c-4b2708a9d129"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.739450 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-config" (OuterVolumeSpecName: "config") pod "805494b2-fcad-4208-895c-4b2708a9d129" (UID: "805494b2-fcad-4208-895c-4b2708a9d129"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.746879 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "805494b2-fcad-4208-895c-4b2708a9d129" (UID: "805494b2-fcad-4208-895c-4b2708a9d129"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.756408 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.756444 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.756458 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.756467 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.756477 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/805494b2-fcad-4208-895c-4b2708a9d129-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.786879 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:17:34 crc kubenswrapper[4756]: I1203 11:17:34.873442 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.111621 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" event={"ID":"805494b2-fcad-4208-895c-4b2708a9d129","Type":"ContainerDied","Data":"b7c991f5e93a06dd74eb990fa384da2991446fe2a5124c44f9d85323202772f1"} Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.112027 4756 scope.go:117] "RemoveContainer" containerID="dcd3bff036886b61e12db8c718f70ff40285a681f76eccc016126fc7346349ca" Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.112220 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-f5wpg" Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.112408 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.117343 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerStarted","Data":"5e7f58e74200fd56673d0f95901af6735bc23f3c2c75a288ccfaaae1affa7ae8"} Dec 03 11:17:35 crc kubenswrapper[4756]: W1203 11:17:35.126427 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod959b66e9_f960_456d_8aeb_f7eb63d55b23.slice/crio-74cac8b8ae3b97074c1cb9df4acc97404a5e4160bdb19500a64ada24e76edbd9 WatchSource:0}: Error finding container 74cac8b8ae3b97074c1cb9df4acc97404a5e4160bdb19500a64ada24e76edbd9: Status 404 returned error can't find the container with id 74cac8b8ae3b97074c1cb9df4acc97404a5e4160bdb19500a64ada24e76edbd9 Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.267632 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8d1bfe-24ca-4cde-9038-a3a01ebd807e" path="/var/lib/kubelet/pods/2d8d1bfe-24ca-4cde-9038-a3a01ebd807e/volumes" Dec 03 
11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.268474 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df1867a-5eed-4be5-a096-4145777df69d" path="/var/lib/kubelet/pods/6df1867a-5eed-4be5-a096-4145777df69d/volumes" Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.283027 4756 scope.go:117] "RemoveContainer" containerID="88533c338ed62592616b04a9aa966f0e8401c3f5cf7a1f1383f692d2b6709d60" Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.314028 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-f5wpg"] Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.326271 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-f5wpg"] Dec 03 11:17:35 crc kubenswrapper[4756]: I1203 11:17:35.494473 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xt2f8"] Dec 03 11:17:35 crc kubenswrapper[4756]: W1203 11:17:35.499210 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79717969_f844_4ec0_935f_2e2886597684.slice/crio-379fdd8cacb1ee5e00da29cc9ab7391e4e9cbc4f5ff212ef2772561227ee611b WatchSource:0}: Error finding container 379fdd8cacb1ee5e00da29cc9ab7391e4e9cbc4f5ff212ef2772561227ee611b: Status 404 returned error can't find the container with id 379fdd8cacb1ee5e00da29cc9ab7391e4e9cbc4f5ff212ef2772561227ee611b Dec 03 11:17:36 crc kubenswrapper[4756]: I1203 11:17:36.138316 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xt2f8" event={"ID":"79717969-f844-4ec0-935f-2e2886597684","Type":"ContainerStarted","Data":"6859e503d811c71c3b86193738a8438ff3b2257d38bea4d5194f8be348f8ea63"} Dec 03 11:17:36 crc kubenswrapper[4756]: I1203 11:17:36.138864 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xt2f8" 
event={"ID":"79717969-f844-4ec0-935f-2e2886597684","Type":"ContainerStarted","Data":"379fdd8cacb1ee5e00da29cc9ab7391e4e9cbc4f5ff212ef2772561227ee611b"} Dec 03 11:17:36 crc kubenswrapper[4756]: I1203 11:17:36.143540 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"959b66e9-f960-456d-8aeb-f7eb63d55b23","Type":"ContainerStarted","Data":"72284c79d8ff3b284c6c79e7c1fe10c8d7ce02be8ecc67e8868d635c35c20fa7"} Dec 03 11:17:36 crc kubenswrapper[4756]: I1203 11:17:36.143592 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"959b66e9-f960-456d-8aeb-f7eb63d55b23","Type":"ContainerStarted","Data":"74cac8b8ae3b97074c1cb9df4acc97404a5e4160bdb19500a64ada24e76edbd9"} Dec 03 11:17:37 crc kubenswrapper[4756]: I1203 11:17:37.159046 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerStarted","Data":"4af7019dd326222cf11c4033e74720d9987b1e883e5fdc66d044a96aaac7e3f7"} Dec 03 11:17:37 crc kubenswrapper[4756]: I1203 11:17:37.162882 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"959b66e9-f960-456d-8aeb-f7eb63d55b23","Type":"ContainerStarted","Data":"be9f938b5da0a9c0f955e22cb1b0a16dca89a22017090c35aaa0d32968cd541d"} Dec 03 11:17:37 crc kubenswrapper[4756]: I1203 11:17:37.190582 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xt2f8" podStartSLOduration=3.190559288 podStartE2EDuration="3.190559288s" podCreationTimestamp="2025-12-03 11:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:37.182029542 +0000 UTC m=+1468.212030786" watchObservedRunningTime="2025-12-03 11:17:37.190559288 +0000 UTC m=+1468.220560532" Dec 03 11:17:37 crc kubenswrapper[4756]: I1203 11:17:37.248439 4756 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805494b2-fcad-4208-895c-4b2708a9d129" path="/var/lib/kubelet/pods/805494b2-fcad-4208-895c-4b2708a9d129/volumes" Dec 03 11:17:38 crc kubenswrapper[4756]: I1203 11:17:38.204781 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.204746232 podStartE2EDuration="4.204746232s" podCreationTimestamp="2025-12-03 11:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:17:38.196890297 +0000 UTC m=+1469.226891551" watchObservedRunningTime="2025-12-03 11:17:38.204746232 +0000 UTC m=+1469.234747476" Dec 03 11:17:39 crc kubenswrapper[4756]: I1203 11:17:39.191238 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerStarted","Data":"d809a075f351d5fd48103b97be32171a4941f809454b474267f0a61ec9065ade"} Dec 03 11:17:39 crc kubenswrapper[4756]: I1203 11:17:39.191795 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="ceilometer-notification-agent" containerID="cri-o://5e7f58e74200fd56673d0f95901af6735bc23f3c2c75a288ccfaaae1affa7ae8" gracePeriod=30 Dec 03 11:17:39 crc kubenswrapper[4756]: I1203 11:17:39.191686 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="sg-core" containerID="cri-o://4af7019dd326222cf11c4033e74720d9987b1e883e5fdc66d044a96aaac7e3f7" gracePeriod=30 Dec 03 11:17:39 crc kubenswrapper[4756]: I1203 11:17:39.191713 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="proxy-httpd" 
containerID="cri-o://d809a075f351d5fd48103b97be32171a4941f809454b474267f0a61ec9065ade" gracePeriod=30 Dec 03 11:17:39 crc kubenswrapper[4756]: I1203 11:17:39.191808 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 11:17:39 crc kubenswrapper[4756]: I1203 11:17:39.193002 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="ceilometer-central-agent" containerID="cri-o://c10a7a78462cdc3426281080d197a61f860a98af4e168d51dc575399c21277eb" gracePeriod=30 Dec 03 11:17:39 crc kubenswrapper[4756]: I1203 11:17:39.228425 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.741123789 podStartE2EDuration="9.228400041s" podCreationTimestamp="2025-12-03 11:17:30 +0000 UTC" firstStartedPulling="2025-12-03 11:17:30.974871166 +0000 UTC m=+1462.004872410" lastFinishedPulling="2025-12-03 11:17:38.462147418 +0000 UTC m=+1469.492148662" observedRunningTime="2025-12-03 11:17:39.223392635 +0000 UTC m=+1470.253393909" watchObservedRunningTime="2025-12-03 11:17:39.228400041 +0000 UTC m=+1470.258401285" Dec 03 11:17:40 crc kubenswrapper[4756]: I1203 11:17:40.205360 4756 generic.go:334] "Generic (PLEG): container finished" podID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerID="d809a075f351d5fd48103b97be32171a4941f809454b474267f0a61ec9065ade" exitCode=0 Dec 03 11:17:40 crc kubenswrapper[4756]: I1203 11:17:40.206039 4756 generic.go:334] "Generic (PLEG): container finished" podID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerID="4af7019dd326222cf11c4033e74720d9987b1e883e5fdc66d044a96aaac7e3f7" exitCode=2 Dec 03 11:17:40 crc kubenswrapper[4756]: I1203 11:17:40.206060 4756 generic.go:334] "Generic (PLEG): container finished" podID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerID="5e7f58e74200fd56673d0f95901af6735bc23f3c2c75a288ccfaaae1affa7ae8" exitCode=0 
Dec 03 11:17:40 crc kubenswrapper[4756]: I1203 11:17:40.205431 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerDied","Data":"d809a075f351d5fd48103b97be32171a4941f809454b474267f0a61ec9065ade"} Dec 03 11:17:40 crc kubenswrapper[4756]: I1203 11:17:40.206122 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerDied","Data":"4af7019dd326222cf11c4033e74720d9987b1e883e5fdc66d044a96aaac7e3f7"} Dec 03 11:17:40 crc kubenswrapper[4756]: I1203 11:17:40.206144 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerDied","Data":"5e7f58e74200fd56673d0f95901af6735bc23f3c2c75a288ccfaaae1affa7ae8"} Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.219565 4756 generic.go:334] "Generic (PLEG): container finished" podID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerID="c10a7a78462cdc3426281080d197a61f860a98af4e168d51dc575399c21277eb" exitCode=0 Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.220052 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerDied","Data":"c10a7a78462cdc3426281080d197a61f860a98af4e168d51dc575399c21277eb"} Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.836240 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864374 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-log-httpd\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864489 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-run-httpd\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864521 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-sg-core-conf-yaml\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864558 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-scripts\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864618 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-combined-ca-bundle\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864674 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-config-data\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864714 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-ceilometer-tls-certs\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.864764 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh77t\" (UniqueName: \"kubernetes.io/projected/a02f2929-8462-4790-a8fd-810400f3c3b8-kube-api-access-wh77t\") pod \"a02f2929-8462-4790-a8fd-810400f3c3b8\" (UID: \"a02f2929-8462-4790-a8fd-810400f3c3b8\") " Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.867364 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.868644 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.872392 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02f2929-8462-4790-a8fd-810400f3c3b8-kube-api-access-wh77t" (OuterVolumeSpecName: "kube-api-access-wh77t") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "kube-api-access-wh77t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.876600 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-scripts" (OuterVolumeSpecName: "scripts") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.924194 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.936139 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.964255 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.967684 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.967892 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh77t\" (UniqueName: \"kubernetes.io/projected/a02f2929-8462-4790-a8fd-810400f3c3b8-kube-api-access-wh77t\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.968105 4756 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.968176 4756 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a02f2929-8462-4790-a8fd-810400f3c3b8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.968233 4756 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.968297 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:41 crc kubenswrapper[4756]: I1203 11:17:41.968368 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.123378 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-config-data" (OuterVolumeSpecName: "config-data") pod "a02f2929-8462-4790-a8fd-810400f3c3b8" (UID: "a02f2929-8462-4790-a8fd-810400f3c3b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.215769 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a02f2929-8462-4790-a8fd-810400f3c3b8-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.233564 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a02f2929-8462-4790-a8fd-810400f3c3b8","Type":"ContainerDied","Data":"2b88d2950055df502f609a29361e7ee9e06a1decc924a2584c19f3837c99ce36"} Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.233635 4756 scope.go:117] "RemoveContainer" containerID="d809a075f351d5fd48103b97be32171a4941f809454b474267f0a61ec9065ade" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.233805 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.306703 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.329862 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.336672 4756 scope.go:117] "RemoveContainer" containerID="4af7019dd326222cf11c4033e74720d9987b1e883e5fdc66d044a96aaac7e3f7" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.367082 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:42 crc kubenswrapper[4756]: E1203 11:17:42.367828 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="sg-core" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.367854 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="sg-core" Dec 03 11:17:42 crc kubenswrapper[4756]: E1203 11:17:42.367890 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805494b2-fcad-4208-895c-4b2708a9d129" containerName="dnsmasq-dns" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.367901 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="805494b2-fcad-4208-895c-4b2708a9d129" containerName="dnsmasq-dns" Dec 03 11:17:42 crc kubenswrapper[4756]: E1203 11:17:42.367925 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805494b2-fcad-4208-895c-4b2708a9d129" containerName="init" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.367935 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="805494b2-fcad-4208-895c-4b2708a9d129" containerName="init" Dec 03 11:17:42 crc kubenswrapper[4756]: E1203 11:17:42.367976 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" 
containerName="ceilometer-central-agent" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.367986 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="ceilometer-central-agent" Dec 03 11:17:42 crc kubenswrapper[4756]: E1203 11:17:42.368006 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="proxy-httpd" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.368015 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="proxy-httpd" Dec 03 11:17:42 crc kubenswrapper[4756]: E1203 11:17:42.368032 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="ceilometer-notification-agent" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.368041 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="ceilometer-notification-agent" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.368320 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="ceilometer-notification-agent" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.368348 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="sg-core" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.368371 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="proxy-httpd" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.368383 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="805494b2-fcad-4208-895c-4b2708a9d129" containerName="dnsmasq-dns" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.368395 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" containerName="ceilometer-central-agent" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.376700 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.379599 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.380718 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.386491 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.387537 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.399404 4756 scope.go:117] "RemoveContainer" containerID="5e7f58e74200fd56673d0f95901af6735bc23f3c2c75a288ccfaaae1affa7ae8" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.427677 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.427793 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.427882 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-scripts\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.427939 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.428060 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-config-data\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.428082 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.428102 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhcbn\" (UniqueName: \"kubernetes.io/projected/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-kube-api-access-fhcbn\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.428153 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-run-httpd\") pod 
\"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.435576 4756 scope.go:117] "RemoveContainer" containerID="c10a7a78462cdc3426281080d197a61f860a98af4e168d51dc575399c21277eb" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.530464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-config-data\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.531090 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.531130 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcbn\" (UniqueName: \"kubernetes.io/projected/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-kube-api-access-fhcbn\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.531171 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.531316 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.531347 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.531437 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-scripts\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.531475 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.534225 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.534576 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.536788 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-config-data\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.537002 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.537032 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.538727 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.539976 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-scripts\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.554411 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcbn\" (UniqueName: \"kubernetes.io/projected/3610ec2a-6a5b-4cae-9863-e5ab6f3267ed-kube-api-access-fhcbn\") pod \"ceilometer-0\" (UID: \"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed\") " pod="openstack/ceilometer-0" Dec 03 11:17:42 crc kubenswrapper[4756]: I1203 11:17:42.713217 
4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 11:17:43 crc kubenswrapper[4756]: I1203 11:17:43.197148 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 11:17:43 crc kubenswrapper[4756]: W1203 11:17:43.199135 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3610ec2a_6a5b_4cae_9863_e5ab6f3267ed.slice/crio-23630f347b4feb3460c01f03f6341fff5dc72d7b4fcd59a2c9f1cd4c837b5835 WatchSource:0}: Error finding container 23630f347b4feb3460c01f03f6341fff5dc72d7b4fcd59a2c9f1cd4c837b5835: Status 404 returned error can't find the container with id 23630f347b4feb3460c01f03f6341fff5dc72d7b4fcd59a2c9f1cd4c837b5835 Dec 03 11:17:43 crc kubenswrapper[4756]: I1203 11:17:43.250823 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02f2929-8462-4790-a8fd-810400f3c3b8" path="/var/lib/kubelet/pods/a02f2929-8462-4790-a8fd-810400f3c3b8/volumes" Dec 03 11:17:43 crc kubenswrapper[4756]: I1203 11:17:43.251868 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed","Type":"ContainerStarted","Data":"23630f347b4feb3460c01f03f6341fff5dc72d7b4fcd59a2c9f1cd4c837b5835"} Dec 03 11:17:44 crc kubenswrapper[4756]: I1203 11:17:44.788488 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:17:44 crc kubenswrapper[4756]: I1203 11:17:44.788879 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:17:45 crc kubenswrapper[4756]: I1203 11:17:45.816362 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Dec 03 11:17:45 crc kubenswrapper[4756]: I1203 11:17:45.816371 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 11:17:54 crc kubenswrapper[4756]: I1203 11:17:54.389681 4756 generic.go:334] "Generic (PLEG): container finished" podID="79717969-f844-4ec0-935f-2e2886597684" containerID="6859e503d811c71c3b86193738a8438ff3b2257d38bea4d5194f8be348f8ea63" exitCode=0 Dec 03 11:17:54 crc kubenswrapper[4756]: I1203 11:17:54.389754 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xt2f8" event={"ID":"79717969-f844-4ec0-935f-2e2886597684","Type":"ContainerDied","Data":"6859e503d811c71c3b86193738a8438ff3b2257d38bea4d5194f8be348f8ea63"} Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:54.797673 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:54.798399 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:54.798458 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:54.804329 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.400510 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.410708 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 11:17:57 crc kubenswrapper[4756]: 
I1203 11:17:55.819935 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.919315 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t75kt\" (UniqueName: \"kubernetes.io/projected/79717969-f844-4ec0-935f-2e2886597684-kube-api-access-t75kt\") pod \"79717969-f844-4ec0-935f-2e2886597684\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.919792 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-combined-ca-bundle\") pod \"79717969-f844-4ec0-935f-2e2886597684\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.920101 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-config-data\") pod \"79717969-f844-4ec0-935f-2e2886597684\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.920220 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-scripts\") pod \"79717969-f844-4ec0-935f-2e2886597684\" (UID: \"79717969-f844-4ec0-935f-2e2886597684\") " Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.928070 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-scripts" (OuterVolumeSpecName: "scripts") pod "79717969-f844-4ec0-935f-2e2886597684" (UID: "79717969-f844-4ec0-935f-2e2886597684"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.929304 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79717969-f844-4ec0-935f-2e2886597684-kube-api-access-t75kt" (OuterVolumeSpecName: "kube-api-access-t75kt") pod "79717969-f844-4ec0-935f-2e2886597684" (UID: "79717969-f844-4ec0-935f-2e2886597684"). InnerVolumeSpecName "kube-api-access-t75kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.955708 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79717969-f844-4ec0-935f-2e2886597684" (UID: "79717969-f844-4ec0-935f-2e2886597684"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:55.966564 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-config-data" (OuterVolumeSpecName: "config-data") pod "79717969-f844-4ec0-935f-2e2886597684" (UID: "79717969-f844-4ec0-935f-2e2886597684"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:56.023090 4756 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:56.023145 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t75kt\" (UniqueName: \"kubernetes.io/projected/79717969-f844-4ec0-935f-2e2886597684-kube-api-access-t75kt\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:56.023166 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:56.023178 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79717969-f844-4ec0-935f-2e2886597684-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:56.416592 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xt2f8" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:56.419196 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xt2f8" event={"ID":"79717969-f844-4ec0-935f-2e2886597684","Type":"ContainerDied","Data":"379fdd8cacb1ee5e00da29cc9ab7391e4e9cbc4f5ff212ef2772561227ee611b"} Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:56.419284 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="379fdd8cacb1ee5e00da29cc9ab7391e4e9cbc4f5ff212ef2772561227ee611b" Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:57.070608 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:57.080313 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:57.080570 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="887bec1e-c8a0-490f-a945-d6d8885c0480" containerName="nova-scheduler-scheduler" containerID="cri-o://aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca" gracePeriod=30 Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:57.100206 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:57.100557 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-log" containerID="cri-o://33d6806e90bb90a6348aa77bab5c84df53d837194e01113411db5fa259fa52ae" gracePeriod=30 Dec 03 11:17:57 crc kubenswrapper[4756]: I1203 11:17:57.100767 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" 
containerName="nova-metadata-metadata" containerID="cri-o://50b4c0081b3024d29df37184780ad0737ca0a892d736a9db2456101831563a11" gracePeriod=30 Dec 03 11:17:58 crc kubenswrapper[4756]: I1203 11:17:58.442653 4756 generic.go:334] "Generic (PLEG): container finished" podID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerID="33d6806e90bb90a6348aa77bab5c84df53d837194e01113411db5fa259fa52ae" exitCode=143 Dec 03 11:17:58 crc kubenswrapper[4756]: I1203 11:17:58.442716 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1943ca-3ada-4b39-884d-a822f2efba3f","Type":"ContainerDied","Data":"33d6806e90bb90a6348aa77bab5c84df53d837194e01113411db5fa259fa52ae"} Dec 03 11:17:58 crc kubenswrapper[4756]: I1203 11:17:58.443343 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-log" containerID="cri-o://72284c79d8ff3b284c6c79e7c1fe10c8d7ce02be8ecc67e8868d635c35c20fa7" gracePeriod=30 Dec 03 11:17:58 crc kubenswrapper[4756]: I1203 11:17:58.443436 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-api" containerID="cri-o://be9f938b5da0a9c0f955e22cb1b0a16dca89a22017090c35aaa0d32968cd541d" gracePeriod=30 Dec 03 11:17:59 crc kubenswrapper[4756]: I1203 11:17:59.456279 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed","Type":"ContainerStarted","Data":"f2427b3a68f927001d62ec53929c55c829ec4ac3e79184eb0eb975a4e9b1d017"} Dec 03 11:17:59 crc kubenswrapper[4756]: I1203 11:17:59.463197 4756 generic.go:334] "Generic (PLEG): container finished" podID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerID="72284c79d8ff3b284c6c79e7c1fe10c8d7ce02be8ecc67e8868d635c35c20fa7" exitCode=143 Dec 03 11:17:59 crc kubenswrapper[4756]: I1203 
11:17:59.463295 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"959b66e9-f960-456d-8aeb-f7eb63d55b23","Type":"ContainerDied","Data":"72284c79d8ff3b284c6c79e7c1fe10c8d7ce02be8ecc67e8868d635c35c20fa7"} Dec 03 11:18:00 crc kubenswrapper[4756]: E1203 11:18:00.013503 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 11:18:00 crc kubenswrapper[4756]: E1203 11:18:00.016118 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 11:18:00 crc kubenswrapper[4756]: E1203 11:18:00.018794 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 11:18:00 crc kubenswrapper[4756]: E1203 11:18:00.018839 4756 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="887bec1e-c8a0-490f-a945-d6d8885c0480" containerName="nova-scheduler-scheduler" Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.273349 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:38188->10.217.0.193:8775: read: connection reset by peer" Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.273389 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:38190->10.217.0.193:8775: read: connection reset by peer" Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.478857 4756 generic.go:334] "Generic (PLEG): container finished" podID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerID="50b4c0081b3024d29df37184780ad0737ca0a892d736a9db2456101831563a11" exitCode=0 Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.479298 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1943ca-3ada-4b39-884d-a822f2efba3f","Type":"ContainerDied","Data":"50b4c0081b3024d29df37184780ad0737ca0a892d736a9db2456101831563a11"} Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.782804 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.948571 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdmvw\" (UniqueName: \"kubernetes.io/projected/ec1943ca-3ada-4b39-884d-a822f2efba3f-kube-api-access-wdmvw\") pod \"ec1943ca-3ada-4b39-884d-a822f2efba3f\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.948834 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-config-data\") pod \"ec1943ca-3ada-4b39-884d-a822f2efba3f\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.948895 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1943ca-3ada-4b39-884d-a822f2efba3f-logs\") pod \"ec1943ca-3ada-4b39-884d-a822f2efba3f\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.948992 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-nova-metadata-tls-certs\") pod \"ec1943ca-3ada-4b39-884d-a822f2efba3f\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.949027 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-combined-ca-bundle\") pod \"ec1943ca-3ada-4b39-884d-a822f2efba3f\" (UID: \"ec1943ca-3ada-4b39-884d-a822f2efba3f\") " Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.949601 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ec1943ca-3ada-4b39-884d-a822f2efba3f-logs" (OuterVolumeSpecName: "logs") pod "ec1943ca-3ada-4b39-884d-a822f2efba3f" (UID: "ec1943ca-3ada-4b39-884d-a822f2efba3f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.949879 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1943ca-3ada-4b39-884d-a822f2efba3f-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:00 crc kubenswrapper[4756]: I1203 11:18:00.959553 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1943ca-3ada-4b39-884d-a822f2efba3f-kube-api-access-wdmvw" (OuterVolumeSpecName: "kube-api-access-wdmvw") pod "ec1943ca-3ada-4b39-884d-a822f2efba3f" (UID: "ec1943ca-3ada-4b39-884d-a822f2efba3f"). InnerVolumeSpecName "kube-api-access-wdmvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.016659 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec1943ca-3ada-4b39-884d-a822f2efba3f" (UID: "ec1943ca-3ada-4b39-884d-a822f2efba3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.023790 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-config-data" (OuterVolumeSpecName: "config-data") pod "ec1943ca-3ada-4b39-884d-a822f2efba3f" (UID: "ec1943ca-3ada-4b39-884d-a822f2efba3f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.040151 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ec1943ca-3ada-4b39-884d-a822f2efba3f" (UID: "ec1943ca-3ada-4b39-884d-a822f2efba3f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.052922 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdmvw\" (UniqueName: \"kubernetes.io/projected/ec1943ca-3ada-4b39-884d-a822f2efba3f-kube-api-access-wdmvw\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.053040 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.053060 4756 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.053076 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1943ca-3ada-4b39-884d-a822f2efba3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.496928 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed","Type":"ContainerStarted","Data":"fa5102ac3a832925bf029b8b42f99efaaedc711e0c1103ff079938b816953639"} Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.500636 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1943ca-3ada-4b39-884d-a822f2efba3f","Type":"ContainerDied","Data":"4088dadbd6f6e931885ef90b15360eb66d31b0e1475086befffd7df9bab9dfee"} Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.500741 4756 scope.go:117] "RemoveContainer" containerID="50b4c0081b3024d29df37184780ad0737ca0a892d736a9db2456101831563a11" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.501094 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.515290 4756 generic.go:334] "Generic (PLEG): container finished" podID="887bec1e-c8a0-490f-a945-d6d8885c0480" containerID="aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca" exitCode=0 Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.515358 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"887bec1e-c8a0-490f-a945-d6d8885c0480","Type":"ContainerDied","Data":"aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca"} Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.535026 4756 scope.go:117] "RemoveContainer" containerID="33d6806e90bb90a6348aa77bab5c84df53d837194e01113411db5fa259fa52ae" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.537402 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.567689 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.583072 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:18:01 crc kubenswrapper[4756]: E1203 11:18:01.583793 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-metadata" Dec 03 11:18:01 crc 
kubenswrapper[4756]: I1203 11:18:01.583821 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-metadata" Dec 03 11:18:01 crc kubenswrapper[4756]: E1203 11:18:01.583843 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-log" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.583852 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-log" Dec 03 11:18:01 crc kubenswrapper[4756]: E1203 11:18:01.583872 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79717969-f844-4ec0-935f-2e2886597684" containerName="nova-manage" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.583881 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="79717969-f844-4ec0-935f-2e2886597684" containerName="nova-manage" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.584170 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-metadata" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.584199 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="79717969-f844-4ec0-935f-2e2886597684" containerName="nova-manage" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.584234 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" containerName="nova-metadata-log" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.585773 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.588979 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.595133 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.599419 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.669020 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.669431 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnts4\" (UniqueName: \"kubernetes.io/projected/aae26896-15a3-48e5-b96a-136209092056-kube-api-access-pnts4\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.669524 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae26896-15a3-48e5-b96a-136209092056-logs\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.669746 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-config-data\") pod \"nova-metadata-0\" 
(UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.670351 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.773204 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.773312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.773356 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnts4\" (UniqueName: \"kubernetes.io/projected/aae26896-15a3-48e5-b96a-136209092056-kube-api-access-pnts4\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.773379 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae26896-15a3-48e5-b96a-136209092056-logs\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.773919 
4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-config-data\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.774054 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae26896-15a3-48e5-b96a-136209092056-logs\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.780930 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.784079 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.790917 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae26896-15a3-48e5-b96a-136209092056-config-data\") pod \"nova-metadata-0\" (UID: \"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.802101 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnts4\" (UniqueName: \"kubernetes.io/projected/aae26896-15a3-48e5-b96a-136209092056-kube-api-access-pnts4\") pod \"nova-metadata-0\" (UID: 
\"aae26896-15a3-48e5-b96a-136209092056\") " pod="openstack/nova-metadata-0" Dec 03 11:18:01 crc kubenswrapper[4756]: I1203 11:18:01.943500 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.117531 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.286236 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-combined-ca-bundle\") pod \"887bec1e-c8a0-490f-a945-d6d8885c0480\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.286766 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tjmm\" (UniqueName: \"kubernetes.io/projected/887bec1e-c8a0-490f-a945-d6d8885c0480-kube-api-access-9tjmm\") pod \"887bec1e-c8a0-490f-a945-d6d8885c0480\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.287192 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-config-data\") pod \"887bec1e-c8a0-490f-a945-d6d8885c0480\" (UID: \"887bec1e-c8a0-490f-a945-d6d8885c0480\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.298260 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887bec1e-c8a0-490f-a945-d6d8885c0480-kube-api-access-9tjmm" (OuterVolumeSpecName: "kube-api-access-9tjmm") pod "887bec1e-c8a0-490f-a945-d6d8885c0480" (UID: "887bec1e-c8a0-490f-a945-d6d8885c0480"). InnerVolumeSpecName "kube-api-access-9tjmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.349079 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "887bec1e-c8a0-490f-a945-d6d8885c0480" (UID: "887bec1e-c8a0-490f-a945-d6d8885c0480"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.367659 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-config-data" (OuterVolumeSpecName: "config-data") pod "887bec1e-c8a0-490f-a945-d6d8885c0480" (UID: "887bec1e-c8a0-490f-a945-d6d8885c0480"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.400456 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.400505 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tjmm\" (UniqueName: \"kubernetes.io/projected/887bec1e-c8a0-490f-a945-d6d8885c0480-kube-api-access-9tjmm\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.400522 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887bec1e-c8a0-490f-a945-d6d8885c0480-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.535364 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"887bec1e-c8a0-490f-a945-d6d8885c0480","Type":"ContainerDied","Data":"0f217f423454d2c788e08134c641a2ee8eaac9dc7ec9d6dd930db360fc69d2c8"} Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.535460 4756 scope.go:117] "RemoveContainer" containerID="aa4b73304bf3783fd8db958d2b0d94de07ed9d28d4777bb474368408ad695aca" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.535943 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.540369 4756 generic.go:334] "Generic (PLEG): container finished" podID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerID="be9f938b5da0a9c0f955e22cb1b0a16dca89a22017090c35aaa0d32968cd541d" exitCode=0 Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.540477 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"959b66e9-f960-456d-8aeb-f7eb63d55b23","Type":"ContainerDied","Data":"be9f938b5da0a9c0f955e22cb1b0a16dca89a22017090c35aaa0d32968cd541d"} Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.540515 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"959b66e9-f960-456d-8aeb-f7eb63d55b23","Type":"ContainerDied","Data":"74cac8b8ae3b97074c1cb9df4acc97404a5e4160bdb19500a64ada24e76edbd9"} Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.540533 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cac8b8ae3b97074c1cb9df4acc97404a5e4160bdb19500a64ada24e76edbd9" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.544107 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed","Type":"ContainerStarted","Data":"dce6c6f81df5825babc8b21a22b8e50befcbb1edb1aa4c2af3b450414091ddcc"} Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.574970 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.604528 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.622605 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.640642 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:18:02 crc kubenswrapper[4756]: E1203 11:18:02.641427 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-api" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.641457 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-api" Dec 03 11:18:02 crc kubenswrapper[4756]: E1203 11:18:02.641493 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-log" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.641503 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-log" Dec 03 11:18:02 crc kubenswrapper[4756]: E1203 11:18:02.641516 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887bec1e-c8a0-490f-a945-d6d8885c0480" containerName="nova-scheduler-scheduler" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.641524 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="887bec1e-c8a0-490f-a945-d6d8885c0480" containerName="nova-scheduler-scheduler" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.641820 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-log" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.641843 4756 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="887bec1e-c8a0-490f-a945-d6d8885c0480" containerName="nova-scheduler-scheduler" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.641855 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" containerName="nova-api-api" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.642938 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.648112 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.681799 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.707642 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/959b66e9-f960-456d-8aeb-f7eb63d55b23-logs\") pod \"959b66e9-f960-456d-8aeb-f7eb63d55b23\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.707903 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-combined-ca-bundle\") pod \"959b66e9-f960-456d-8aeb-f7eb63d55b23\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.708002 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-internal-tls-certs\") pod \"959b66e9-f960-456d-8aeb-f7eb63d55b23\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.708144 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-public-tls-certs\") pod \"959b66e9-f960-456d-8aeb-f7eb63d55b23\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.708221 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-config-data\") pod \"959b66e9-f960-456d-8aeb-f7eb63d55b23\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.708390 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpf49\" (UniqueName: \"kubernetes.io/projected/959b66e9-f960-456d-8aeb-f7eb63d55b23-kube-api-access-zpf49\") pod \"959b66e9-f960-456d-8aeb-f7eb63d55b23\" (UID: \"959b66e9-f960-456d-8aeb-f7eb63d55b23\") " Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.710631 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/959b66e9-f960-456d-8aeb-f7eb63d55b23-logs" (OuterVolumeSpecName: "logs") pod "959b66e9-f960-456d-8aeb-f7eb63d55b23" (UID: "959b66e9-f960-456d-8aeb-f7eb63d55b23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.729494 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959b66e9-f960-456d-8aeb-f7eb63d55b23-kube-api-access-zpf49" (OuterVolumeSpecName: "kube-api-access-zpf49") pod "959b66e9-f960-456d-8aeb-f7eb63d55b23" (UID: "959b66e9-f960-456d-8aeb-f7eb63d55b23"). InnerVolumeSpecName "kube-api-access-zpf49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.751177 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.767391 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-config-data" (OuterVolumeSpecName: "config-data") pod "959b66e9-f960-456d-8aeb-f7eb63d55b23" (UID: "959b66e9-f960-456d-8aeb-f7eb63d55b23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.771868 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "959b66e9-f960-456d-8aeb-f7eb63d55b23" (UID: "959b66e9-f960-456d-8aeb-f7eb63d55b23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: W1203 11:18:02.776852 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaae26896_15a3_48e5_b96a_136209092056.slice/crio-a5201ced5bd190babdb31d8189c9e251b1873bf1a3f5780899304d2500d65612 WatchSource:0}: Error finding container a5201ced5bd190babdb31d8189c9e251b1873bf1a3f5780899304d2500d65612: Status 404 returned error can't find the container with id a5201ced5bd190babdb31d8189c9e251b1873bf1a3f5780899304d2500d65612 Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.817003 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ce8020-3f9b-4085-9f66-c05f682cde05-config-data\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.817601 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspr4\" (UniqueName: \"kubernetes.io/projected/50ce8020-3f9b-4085-9f66-c05f682cde05-kube-api-access-zspr4\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.817933 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce8020-3f9b-4085-9f66-c05f682cde05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.818385 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpf49\" (UniqueName: 
\"kubernetes.io/projected/959b66e9-f960-456d-8aeb-f7eb63d55b23-kube-api-access-zpf49\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.818404 4756 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/959b66e9-f960-456d-8aeb-f7eb63d55b23-logs\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.818416 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.818426 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.827479 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "959b66e9-f960-456d-8aeb-f7eb63d55b23" (UID: "959b66e9-f960-456d-8aeb-f7eb63d55b23"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.833412 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "959b66e9-f960-456d-8aeb-f7eb63d55b23" (UID: "959b66e9-f960-456d-8aeb-f7eb63d55b23"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.922006 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ce8020-3f9b-4085-9f66-c05f682cde05-config-data\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.922129 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zspr4\" (UniqueName: \"kubernetes.io/projected/50ce8020-3f9b-4085-9f66-c05f682cde05-kube-api-access-zspr4\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.922452 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce8020-3f9b-4085-9f66-c05f682cde05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.922687 4756 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.922706 4756 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/959b66e9-f960-456d-8aeb-f7eb63d55b23-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.940709 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ce8020-3f9b-4085-9f66-c05f682cde05-config-data\") pod \"nova-scheduler-0\" (UID: 
\"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.944143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce8020-3f9b-4085-9f66-c05f682cde05-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.950286 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zspr4\" (UniqueName: \"kubernetes.io/projected/50ce8020-3f9b-4085-9f66-c05f682cde05-kube-api-access-zspr4\") pod \"nova-scheduler-0\" (UID: \"50ce8020-3f9b-4085-9f66-c05f682cde05\") " pod="openstack/nova-scheduler-0" Dec 03 11:18:02 crc kubenswrapper[4756]: I1203 11:18:02.994094 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.277277 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887bec1e-c8a0-490f-a945-d6d8885c0480" path="/var/lib/kubelet/pods/887bec1e-c8a0-490f-a945-d6d8885c0480/volumes" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.278493 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1943ca-3ada-4b39-884d-a822f2efba3f" path="/var/lib/kubelet/pods/ec1943ca-3ada-4b39-884d-a822f2efba3f/volumes" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.533204 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.564717 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aae26896-15a3-48e5-b96a-136209092056","Type":"ContainerStarted","Data":"c8322a8893a1d301451a5621375354f885b0ff94e5e6afc7a1166af1be083ae8"} Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.565216 
4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aae26896-15a3-48e5-b96a-136209092056","Type":"ContainerStarted","Data":"8e9477547eea49fe349bcaf86ef16ea1501216bef6e4208542dbff62b36c6659"} Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.565237 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aae26896-15a3-48e5-b96a-136209092056","Type":"ContainerStarted","Data":"a5201ced5bd190babdb31d8189c9e251b1873bf1a3f5780899304d2500d65612"} Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.568902 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.598877 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.598853097 podStartE2EDuration="2.598853097s" podCreationTimestamp="2025-12-03 11:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:18:03.591522138 +0000 UTC m=+1494.621523402" watchObservedRunningTime="2025-12-03 11:18:03.598853097 +0000 UTC m=+1494.628854341" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.819036 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.862754 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.876142 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.878717 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.883142 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.884171 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.884347 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.888075 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.970942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.971045 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c016bc8-281b-4dcc-9475-9c373175f026-logs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.971096 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.971121 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.971175 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-config-data\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:03 crc kubenswrapper[4756]: I1203 11:18:03.971200 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pfg\" (UniqueName: \"kubernetes.io/projected/9c016bc8-281b-4dcc-9475-9c373175f026-kube-api-access-n2pfg\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.073585 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.073665 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c016bc8-281b-4dcc-9475-9c373175f026-logs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.073706 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc 
kubenswrapper[4756]: I1203 11:18:04.073724 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.073764 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-config-data\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.073784 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pfg\" (UniqueName: \"kubernetes.io/projected/9c016bc8-281b-4dcc-9475-9c373175f026-kube-api-access-n2pfg\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.074834 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c016bc8-281b-4dcc-9475-9c373175f026-logs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.080821 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.081010 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-config-data\") pod \"nova-api-0\" (UID: 
\"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.081696 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.085039 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c016bc8-281b-4dcc-9475-9c373175f026-public-tls-certs\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.096284 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pfg\" (UniqueName: \"kubernetes.io/projected/9c016bc8-281b-4dcc-9475-9c373175f026-kube-api-access-n2pfg\") pod \"nova-api-0\" (UID: \"9c016bc8-281b-4dcc-9475-9c373175f026\") " pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.213129 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.583940 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50ce8020-3f9b-4085-9f66-c05f682cde05","Type":"ContainerStarted","Data":"901db7825540704c7087e9e747e106c55bd9d5b7ed7f029e02bd4c97a759111f"} Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.584402 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50ce8020-3f9b-4085-9f66-c05f682cde05","Type":"ContainerStarted","Data":"0c8a8381041b4026ec50b45736d36982cdfe9627568a75558584f1c93b7b0dc7"} Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.592145 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3610ec2a-6a5b-4cae-9863-e5ab6f3267ed","Type":"ContainerStarted","Data":"126cf6465f7de0a0835be2095ae4602187ebb4b3f7d9c2729407e20c55b8fa79"} Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.615396 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.615364844 podStartE2EDuration="2.615364844s" podCreationTimestamp="2025-12-03 11:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:18:04.60464113 +0000 UTC m=+1495.634642374" watchObservedRunningTime="2025-12-03 11:18:04.615364844 +0000 UTC m=+1495.645366088" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.635325 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6820009799999998 podStartE2EDuration="22.635298775s" podCreationTimestamp="2025-12-03 11:17:42 +0000 UTC" firstStartedPulling="2025-12-03 11:17:43.20364445 +0000 UTC m=+1474.233645704" lastFinishedPulling="2025-12-03 11:18:03.156942255 +0000 UTC m=+1494.186943499" observedRunningTime="2025-12-03 
11:18:04.629306898 +0000 UTC m=+1495.659308162" watchObservedRunningTime="2025-12-03 11:18:04.635298775 +0000 UTC m=+1495.665300019" Dec 03 11:18:04 crc kubenswrapper[4756]: I1203 11:18:04.709071 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 11:18:04 crc kubenswrapper[4756]: W1203 11:18:04.714670 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c016bc8_281b_4dcc_9475_9c373175f026.slice/crio-ead8b9ec74cbef238ac8220eaddc5fe2ee9fb98574c301d04eb47cbe2cab4f79 WatchSource:0}: Error finding container ead8b9ec74cbef238ac8220eaddc5fe2ee9fb98574c301d04eb47cbe2cab4f79: Status 404 returned error can't find the container with id ead8b9ec74cbef238ac8220eaddc5fe2ee9fb98574c301d04eb47cbe2cab4f79 Dec 03 11:18:05 crc kubenswrapper[4756]: I1203 11:18:05.257430 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959b66e9-f960-456d-8aeb-f7eb63d55b23" path="/var/lib/kubelet/pods/959b66e9-f960-456d-8aeb-f7eb63d55b23/volumes" Dec 03 11:18:05 crc kubenswrapper[4756]: I1203 11:18:05.610549 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c016bc8-281b-4dcc-9475-9c373175f026","Type":"ContainerStarted","Data":"7a46aa537c3b473e9fa285ba117779559192c64bfba095076dd8d1b86e07c428"} Dec 03 11:18:05 crc kubenswrapper[4756]: I1203 11:18:05.611895 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c016bc8-281b-4dcc-9475-9c373175f026","Type":"ContainerStarted","Data":"128f858df125f57e41f08aa6ab41ef9d2bd03e2ee2f808cc0d38f8489fb9a8cf"} Dec 03 11:18:05 crc kubenswrapper[4756]: I1203 11:18:05.612129 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c016bc8-281b-4dcc-9475-9c373175f026","Type":"ContainerStarted","Data":"ead8b9ec74cbef238ac8220eaddc5fe2ee9fb98574c301d04eb47cbe2cab4f79"} Dec 03 11:18:05 crc 
kubenswrapper[4756]: I1203 11:18:05.612266 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 11:18:05 crc kubenswrapper[4756]: I1203 11:18:05.650653 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.650631404 podStartE2EDuration="2.650631404s" podCreationTimestamp="2025-12-03 11:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:18:05.632358815 +0000 UTC m=+1496.662360069" watchObservedRunningTime="2025-12-03 11:18:05.650631404 +0000 UTC m=+1496.680632658" Dec 03 11:18:06 crc kubenswrapper[4756]: I1203 11:18:06.945461 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 11:18:06 crc kubenswrapper[4756]: I1203 11:18:06.946084 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 11:18:07 crc kubenswrapper[4756]: I1203 11:18:07.997151 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 11:18:11 crc kubenswrapper[4756]: I1203 11:18:11.944323 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 11:18:11 crc kubenswrapper[4756]: I1203 11:18:11.945256 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 11:18:12 crc kubenswrapper[4756]: I1203 11:18:12.960607 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aae26896-15a3-48e5-b96a-136209092056" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 11:18:12 crc kubenswrapper[4756]: I1203 11:18:12.960648 4756 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="aae26896-15a3-48e5-b96a-136209092056" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 11:18:12 crc kubenswrapper[4756]: I1203 11:18:12.994429 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 11:18:13 crc kubenswrapper[4756]: I1203 11:18:13.031635 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 11:18:13 crc kubenswrapper[4756]: I1203 11:18:13.793239 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 11:18:14 crc kubenswrapper[4756]: I1203 11:18:14.214812 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:18:14 crc kubenswrapper[4756]: I1203 11:18:14.215166 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 11:18:15 crc kubenswrapper[4756]: I1203 11:18:15.234251 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c016bc8-281b-4dcc-9475-9c373175f026" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 11:18:15 crc kubenswrapper[4756]: I1203 11:18:15.234362 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c016bc8-281b-4dcc-9475-9c373175f026" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 11:18:21 crc kubenswrapper[4756]: I1203 11:18:21.951066 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Dec 03 11:18:21 crc kubenswrapper[4756]: I1203 11:18:21.955155 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 11:18:21 crc kubenswrapper[4756]: I1203 11:18:21.962265 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 11:18:22 crc kubenswrapper[4756]: I1203 11:18:22.607345 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:18:22 crc kubenswrapper[4756]: I1203 11:18:22.607438 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:18:22 crc kubenswrapper[4756]: I1203 11:18:22.879555 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 11:18:24 crc kubenswrapper[4756]: I1203 11:18:24.221540 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 11:18:24 crc kubenswrapper[4756]: I1203 11:18:24.222238 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 11:18:24 crc kubenswrapper[4756]: I1203 11:18:24.222278 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 11:18:24 crc kubenswrapper[4756]: I1203 11:18:24.231187 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 11:18:24 crc kubenswrapper[4756]: I1203 11:18:24.891008 
4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 11:18:24 crc kubenswrapper[4756]: I1203 11:18:24.898034 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 11:18:42 crc kubenswrapper[4756]: I1203 11:18:42.751676 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 11:18:52 crc kubenswrapper[4756]: I1203 11:18:52.607299 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:18:52 crc kubenswrapper[4756]: I1203 11:18:52.608293 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:18:54 crc kubenswrapper[4756]: I1203 11:18:54.051073 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 11:18:55 crc kubenswrapper[4756]: I1203 11:18:55.975419 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:18:59 crc kubenswrapper[4756]: I1203 11:18:59.890410 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerName="rabbitmq" containerID="cri-o://8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1" gracePeriod=604795 Dec 03 11:19:01 crc kubenswrapper[4756]: I1203 11:19:01.524214 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-cell1-server-0" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="rabbitmq" containerID="cri-o://7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca" gracePeriod=604795 Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.886931 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.981167 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-plugins-conf\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.981746 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-server-conf\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.981784 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-erlang-cookie\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.981836 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-tls\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.981909 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-erlang-cookie-secret\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.981938 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-config-data\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.982535 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.982686 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.982866 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-pod-info\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.982908 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-plugins\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.982995 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-confd\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.983026 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.983165 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tdwd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-kube-api-access-2tdwd\") pod \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\" (UID: \"b8770c2d-c514-44e9-99d6-c8713f7f9ab1\") " Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.987551 4756 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.987590 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:06 crc kubenswrapper[4756]: I1203 11:19:06.995857 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.014438 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.020896 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.022160 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-kube-api-access-2tdwd" (OuterVolumeSpecName: "kube-api-access-2tdwd") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "kube-api-access-2tdwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.030776 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.047154 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-pod-info" (OuterVolumeSpecName: "pod-info") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.077636 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-config-data" (OuterVolumeSpecName: "config-data") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.094433 4756 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-pod-info\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.094478 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.094512 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.094524 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tdwd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-kube-api-access-2tdwd\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.094536 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.094549 4756 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.094564 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.130595 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.137254 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-server-conf" (OuterVolumeSpecName: "server-conf") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.197449 4756 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-server-conf\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.197527 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.217580 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b8770c2d-c514-44e9-99d6-c8713f7f9ab1" (UID: "b8770c2d-c514-44e9-99d6-c8713f7f9ab1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.299739 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8770c2d-c514-44e9-99d6-c8713f7f9ab1-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.345436 4756 generic.go:334] "Generic (PLEG): container finished" podID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerID="8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1" exitCode=0
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.345492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8770c2d-c514-44e9-99d6-c8713f7f9ab1","Type":"ContainerDied","Data":"8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1"}
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.345522 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8770c2d-c514-44e9-99d6-c8713f7f9ab1","Type":"ContainerDied","Data":"ce20b540430634554e7fc9e5376c657949d69fbe4a6473d12267c8fd1fdc3570"}
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.345542 4756 scope.go:117] "RemoveContainer" containerID="8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.345539 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.391039 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.395007 4756 scope.go:117] "RemoveContainer" containerID="94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.405799 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.433413 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 11:19:07 crc kubenswrapper[4756]: E1203 11:19:07.434055 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerName="rabbitmq"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.434081 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerName="rabbitmq"
Dec 03 11:19:07 crc kubenswrapper[4756]: E1203 11:19:07.434137 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerName="setup-container"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.434144 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerName="setup-container"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.434378 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" containerName="rabbitmq"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.435911 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.440185 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.441048 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.441382 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6gvvv"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.441696 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.441977 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.442377 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.442390 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.443736 4756 scope.go:117] "RemoveContainer" containerID="8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1"
Dec 03 11:19:07 crc kubenswrapper[4756]: E1203 11:19:07.444910 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1\": container with ID starting with 8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1 not found: ID does not exist" containerID="8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.445088 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1"} err="failed to get container status \"8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1\": rpc error: code = NotFound desc = could not find container \"8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1\": container with ID starting with 8b3dba8f90b2b5070b229cdaa84213a06e7fb07d3de9316e60d100e63611eaf1 not found: ID does not exist"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.445231 4756 scope.go:117] "RemoveContainer" containerID="94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49"
Dec 03 11:19:07 crc kubenswrapper[4756]: E1203 11:19:07.445682 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49\": container with ID starting with 94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49 not found: ID does not exist" containerID="94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.445790 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49"} err="failed to get container status \"94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49\": rpc error: code = NotFound desc = could not find container \"94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49\": container with ID starting with 94249f4558f22695e46c685db1ad9481d55734fa5f8731fab5c3968bd41e0d49 not found: ID does not exist"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.486663 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.503697 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.503778 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.503825 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spx5t\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-kube-api-access-spx5t\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.503860 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a347e9e2-376b-44ac-92de-25736c30ec1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.503884 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.503935 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.503992 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.504037 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.504078 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.504367 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a347e9e2-376b-44ac-92de-25736c30ec1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.504473 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607086 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a347e9e2-376b-44ac-92de-25736c30ec1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607169 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607263 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607382 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607438 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spx5t\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-kube-api-access-spx5t\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607515 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a347e9e2-376b-44ac-92de-25736c30ec1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607546 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607676 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607722 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607802 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.607909 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.608277 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.608740 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.609915 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.609432 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.614315 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.621908 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.623098 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a347e9e2-376b-44ac-92de-25736c30ec1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.625687 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a347e9e2-376b-44ac-92de-25736c30ec1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.654468 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.654624 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a347e9e2-376b-44ac-92de-25736c30ec1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.655033 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.667835 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spx5t\" (UniqueName: \"kubernetes.io/projected/a347e9e2-376b-44ac-92de-25736c30ec1e-kube-api-access-spx5t\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.692301 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"a347e9e2-376b-44ac-92de-25736c30ec1e\") " pod="openstack/rabbitmq-server-0"
Dec 03 11:19:07 crc kubenswrapper[4756]: I1203 11:19:07.824756 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.281694 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327452 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4570f01f-6639-41a5-9201-c49ed4fdefa8-erlang-cookie-secret\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327511 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-plugins-conf\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327567 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4570f01f-6639-41a5-9201-c49ed4fdefa8-pod-info\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327706 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-erlang-cookie\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327738 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-config-data\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327764 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-tls\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327810 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327862 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-plugins\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.327908 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-confd\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.328339 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5fx\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-kube-api-access-pp5fx\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.328374 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-server-conf\") pod \"4570f01f-6639-41a5-9201-c49ed4fdefa8\" (UID: \"4570f01f-6639-41a5-9201-c49ed4fdefa8\") "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.334866 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.339450 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.340499 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.342220 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4570f01f-6639-41a5-9201-c49ed4fdefa8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.342286 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.342699 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.351478 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4570f01f-6639-41a5-9201-c49ed4fdefa8-pod-info" (OuterVolumeSpecName: "pod-info") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.370465 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-kube-api-access-pp5fx" (OuterVolumeSpecName: "kube-api-access-pp5fx") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "kube-api-access-pp5fx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.385158 4756 generic.go:334] "Generic (PLEG): container finished" podID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerID="7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca" exitCode=0
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.385223 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4570f01f-6639-41a5-9201-c49ed4fdefa8","Type":"ContainerDied","Data":"7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca"}
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.385265 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4570f01f-6639-41a5-9201-c49ed4fdefa8","Type":"ContainerDied","Data":"3a2d9146e97a66b32be0afb4ba71ba2a821e2aec18f6a7f6c60433a540266b4f"}
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.385288 4756 scope.go:117] "RemoveContainer" containerID="7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.385487 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.403063 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-config-data" (OuterVolumeSpecName: "config-data") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.419340 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-server-conf" (OuterVolumeSpecName: "server-conf") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431815 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5fx\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-kube-api-access-pp5fx\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431874 4756 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-server-conf\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431888 4756 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4570f01f-6639-41a5-9201-c49ed4fdefa8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431902 4756 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431914 4756 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4570f01f-6639-41a5-9201-c49ed4fdefa8-pod-info\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431928 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431940 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4570f01f-6639-41a5-9201-c49ed4fdefa8-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.431968 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.432002 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.432017 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.439268 4756 scope.go:117] "RemoveContainer" containerID="759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.511365 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.515759 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.533933 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4570f01f-6639-41a5-9201-c49ed4fdefa8" (UID: "4570f01f-6639-41a5-9201-c49ed4fdefa8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.535438 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.535508 4756 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4570f01f-6639-41a5-9201-c49ed4fdefa8-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.630983 4756 scope.go:117] "RemoveContainer" containerID="7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca"
Dec 03 11:19:08 crc kubenswrapper[4756]: E1203 11:19:08.631519 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca\": container with ID starting with 7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca not found: ID does not exist" containerID="7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.631547 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca"} err="failed to get container status \"7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca\": rpc error: code = NotFound desc = could not find container \"7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca\": container with ID starting with 7bd53bf9da974a60581db6630433e3a645a5cc731966117193402da4864891ca not found: ID does not exist"
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.631571 4756
scope.go:117] "RemoveContainer" containerID="759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed" Dec 03 11:19:08 crc kubenswrapper[4756]: E1203 11:19:08.631930 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed\": container with ID starting with 759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed not found: ID does not exist" containerID="759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.631964 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed"} err="failed to get container status \"759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed\": rpc error: code = NotFound desc = could not find container \"759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed\": container with ID starting with 759a763cd38e4772ed66917e6f199f0a29ed3545148494464c028d937ead94ed not found: ID does not exist" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.745057 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.757142 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.779325 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:19:08 crc kubenswrapper[4756]: E1203 11:19:08.780460 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="rabbitmq" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.780635 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="rabbitmq" Dec 03 11:19:08 crc kubenswrapper[4756]: E1203 11:19:08.780730 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="setup-container" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.780805 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="setup-container" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.781198 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" containerName="rabbitmq" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.782498 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.786514 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.786580 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.792656 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.793004 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.793739 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.794005 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sm9m8" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.795391 4756 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.825777 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.842108 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.842176 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.842219 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.842250 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.854231 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.856601 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.856972 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghf49\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-kube-api-access-ghf49\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.857221 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.857475 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.858079 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.859034 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996097 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghf49\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-kube-api-access-ghf49\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996226 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996270 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996313 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996376 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996422 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996452 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996479 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996509 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996606 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.996644 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.997860 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.998036 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.998209 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" 
Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.998270 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.998707 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:08 crc kubenswrapper[4756]: I1203 11:19:08.998884 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.001995 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.002598 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.002973 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.004280 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.020187 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghf49\" (UniqueName: \"kubernetes.io/projected/1bfa9eab-e774-49f9-b1f6-f2afba51c9ae-kube-api-access-ghf49\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.047493 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.131086 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.264411 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4570f01f-6639-41a5-9201-c49ed4fdefa8" path="/var/lib/kubelet/pods/4570f01f-6639-41a5-9201-c49ed4fdefa8/volumes" Dec 03 11:19:09 crc kubenswrapper[4756]: I1203 11:19:09.276902 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8770c2d-c514-44e9-99d6-c8713f7f9ab1" path="/var/lib/kubelet/pods/b8770c2d-c514-44e9-99d6-c8713f7f9ab1/volumes" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:09.397230 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a347e9e2-376b-44ac-92de-25736c30ec1e","Type":"ContainerStarted","Data":"93dd3c22c75d4d06610365dba0067f7138aa283478a7a7a24e0db9cea9afdf86"} Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.090628 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-x7bcn"] Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.093241 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.098237 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.109849 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-x7bcn"] Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.227458 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.227532 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.227561 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6hm\" (UniqueName: \"kubernetes.io/projected/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-kube-api-access-zt6hm\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.227631 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-config\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.227652 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.227711 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.227734 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.307306 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-x7bcn"] Dec 03 11:19:12 crc kubenswrapper[4756]: E1203 11:19:10.308445 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-zt6hm openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" podUID="c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.330272 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.330409 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.330496 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.330539 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6hm\" (UniqueName: \"kubernetes.io/projected/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-kube-api-access-zt6hm\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.330719 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-config\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.330762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.330970 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.331708 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.331812 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.331974 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.332626 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-sb\") pod 
\"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.332838 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.333471 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-config\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.357535 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-rb9d7"] Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.360215 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.373905 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-rb9d7"] Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.374372 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6hm\" (UniqueName: \"kubernetes.io/projected/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-kube-api-access-zt6hm\") pod \"dnsmasq-dns-79bd4cc8c9-x7bcn\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.409185 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.433531 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-dns-svc\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.433602 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.433773 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-config\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.434133 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.434207 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-ovsdbserver-sb\") pod 
\"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.434491 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.434543 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2sk\" (UniqueName: \"kubernetes.io/projected/cdfe5594-c723-4251-9daa-64c59d20f048-kube-api-access-qq2sk\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.446420 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536063 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-openstack-edpm-ipam\") pod \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536161 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-svc\") pod \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536211 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt6hm\" (UniqueName: \"kubernetes.io/projected/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-kube-api-access-zt6hm\") pod \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536305 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-config\") pod \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536455 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-nb\") pod \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536603 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-sb\") pod \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536638 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-swift-storage-0\") pod \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\" (UID: \"c41bdce6-cf73-485b-9eee-2e89d7d1a7a2\") " Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.536912 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" (UID: "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537020 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" (UID: "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537056 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537196 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537216 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq2sk\" (UniqueName: \"kubernetes.io/projected/cdfe5594-c723-4251-9daa-64c59d20f048-kube-api-access-qq2sk\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537254 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-dns-svc\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537271 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " 
pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537301 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-config\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537353 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" (UID: "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537414 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537488 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537501 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537500 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-config" (OuterVolumeSpecName: "config") pod "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" (UID: "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537514 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.537615 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" (UID: "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.538188 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" (UID: "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.538534 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.538641 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.538759 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-dns-svc\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.539287 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.539522 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-config\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 
11:19:10.539645 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cdfe5594-c723-4251-9daa-64c59d20f048-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.542774 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-kube-api-access-zt6hm" (OuterVolumeSpecName: "kube-api-access-zt6hm") pod "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" (UID: "c41bdce6-cf73-485b-9eee-2e89d7d1a7a2"). InnerVolumeSpecName "kube-api-access-zt6hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.559671 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq2sk\" (UniqueName: \"kubernetes.io/projected/cdfe5594-c723-4251-9daa-64c59d20f048-kube-api-access-qq2sk\") pod \"dnsmasq-dns-55478c4467-rb9d7\" (UID: \"cdfe5594-c723-4251-9daa-64c59d20f048\") " pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.640048 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt6hm\" (UniqueName: \"kubernetes.io/projected/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-kube-api-access-zt6hm\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.640096 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.640112 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-ovsdbserver-sb\") on node \"crc\" DevicePath 
\"\"" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.640124 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:10.742154 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:11.440874 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a347e9e2-376b-44ac-92de-25736c30ec1e","Type":"ContainerStarted","Data":"ddc0e7c56a793b6aaaee1fcf5009d7792180deda4b503756726991e69df5938a"} Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:11.440893 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-x7bcn" Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:11.521174 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-x7bcn"] Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:11.531500 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-x7bcn"] Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:12.445268 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 11:19:12 crc kubenswrapper[4756]: I1203 11:19:12.454751 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-rb9d7"] Dec 03 11:19:13 crc kubenswrapper[4756]: I1203 11:19:13.246010 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41bdce6-cf73-485b-9eee-2e89d7d1a7a2" path="/var/lib/kubelet/pods/c41bdce6-cf73-485b-9eee-2e89d7d1a7a2/volumes" Dec 03 11:19:13 crc kubenswrapper[4756]: I1203 11:19:13.464159 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="cdfe5594-c723-4251-9daa-64c59d20f048" containerID="8e278b11c165e899bea70936693800b1463e3bd8cd99fda5b5df2d3fb9315a50" exitCode=0 Dec 03 11:19:13 crc kubenswrapper[4756]: I1203 11:19:13.464326 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" event={"ID":"cdfe5594-c723-4251-9daa-64c59d20f048","Type":"ContainerDied","Data":"8e278b11c165e899bea70936693800b1463e3bd8cd99fda5b5df2d3fb9315a50"} Dec 03 11:19:13 crc kubenswrapper[4756]: I1203 11:19:13.464368 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" event={"ID":"cdfe5594-c723-4251-9daa-64c59d20f048","Type":"ContainerStarted","Data":"2f3bcfd3a3c9cabfdbb645b1f3a5d6ae35ccc0c75005f8227a8ead63494fb0fe"} Dec 03 11:19:13 crc kubenswrapper[4756]: I1203 11:19:13.467611 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae","Type":"ContainerStarted","Data":"0c5ddaee1d874b64a2299d5501a4142a66ae3287633e8d6b9b1e0720f0154c5b"} Dec 03 11:19:14 crc kubenswrapper[4756]: I1203 11:19:14.492194 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" event={"ID":"cdfe5594-c723-4251-9daa-64c59d20f048","Type":"ContainerStarted","Data":"e0aef74c5fbc7e764284ac03aafcb067b48456fd5c4f3cb13465eb4722f21a9c"} Dec 03 11:19:14 crc kubenswrapper[4756]: I1203 11:19:14.492874 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:14 crc kubenswrapper[4756]: I1203 11:19:14.495066 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae","Type":"ContainerStarted","Data":"68f497219e3b741e45940e92ec61eb20a1c05b4890ed23b01e2e6632faac74ea"} Dec 03 11:19:14 crc kubenswrapper[4756]: I1203 11:19:14.525326 4756 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" podStartSLOduration=4.525296451 podStartE2EDuration="4.525296451s" podCreationTimestamp="2025-12-03 11:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:19:14.522238355 +0000 UTC m=+1565.552239609" watchObservedRunningTime="2025-12-03 11:19:14.525296451 +0000 UTC m=+1565.555297705" Dec 03 11:19:20 crc kubenswrapper[4756]: I1203 11:19:20.743627 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-rb9d7" Dec 03 11:19:20 crc kubenswrapper[4756]: I1203 11:19:20.844088 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xlvvh"] Dec 03 11:19:20 crc kubenswrapper[4756]: I1203 11:19:20.844481 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" podUID="0f54d4af-c690-4860-924b-83eacde4c90c" containerName="dnsmasq-dns" containerID="cri-o://8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01" gracePeriod=10 Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.490777 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.588745 4756 generic.go:334] "Generic (PLEG): container finished" podID="0f54d4af-c690-4860-924b-83eacde4c90c" containerID="8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01" exitCode=0 Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.588816 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" event={"ID":"0f54d4af-c690-4860-924b-83eacde4c90c","Type":"ContainerDied","Data":"8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01"} Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.588881 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" event={"ID":"0f54d4af-c690-4860-924b-83eacde4c90c","Type":"ContainerDied","Data":"f925e6ce5d6d8a9070b248de2a21b963d030dbeb10f60bab91b1a0fced0aa560"} Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.588906 4756 scope.go:117] "RemoveContainer" containerID="8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.589074 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xlvvh" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.603632 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-swift-storage-0\") pod \"0f54d4af-c690-4860-924b-83eacde4c90c\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.603772 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvv4\" (UniqueName: \"kubernetes.io/projected/0f54d4af-c690-4860-924b-83eacde4c90c-kube-api-access-5lvv4\") pod \"0f54d4af-c690-4860-924b-83eacde4c90c\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.603825 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-svc\") pod \"0f54d4af-c690-4860-924b-83eacde4c90c\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.604070 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-sb\") pod \"0f54d4af-c690-4860-924b-83eacde4c90c\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.604103 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-nb\") pod \"0f54d4af-c690-4860-924b-83eacde4c90c\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.604135 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-config\") pod \"0f54d4af-c690-4860-924b-83eacde4c90c\" (UID: \"0f54d4af-c690-4860-924b-83eacde4c90c\") " Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.612344 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f54d4af-c690-4860-924b-83eacde4c90c-kube-api-access-5lvv4" (OuterVolumeSpecName: "kube-api-access-5lvv4") pod "0f54d4af-c690-4860-924b-83eacde4c90c" (UID: "0f54d4af-c690-4860-924b-83eacde4c90c"). InnerVolumeSpecName "kube-api-access-5lvv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.628241 4756 scope.go:117] "RemoveContainer" containerID="7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.675799 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f54d4af-c690-4860-924b-83eacde4c90c" (UID: "0f54d4af-c690-4860-924b-83eacde4c90c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.677591 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0f54d4af-c690-4860-924b-83eacde4c90c" (UID: "0f54d4af-c690-4860-924b-83eacde4c90c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.697107 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f54d4af-c690-4860-924b-83eacde4c90c" (UID: "0f54d4af-c690-4860-924b-83eacde4c90c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.707685 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.707740 4756 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.707754 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lvv4\" (UniqueName: \"kubernetes.io/projected/0f54d4af-c690-4860-924b-83eacde4c90c-kube-api-access-5lvv4\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.707768 4756 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.710560 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-config" (OuterVolumeSpecName: "config") pod "0f54d4af-c690-4860-924b-83eacde4c90c" (UID: "0f54d4af-c690-4860-924b-83eacde4c90c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.712444 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f54d4af-c690-4860-924b-83eacde4c90c" (UID: "0f54d4af-c690-4860-924b-83eacde4c90c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.781970 4756 scope.go:117] "RemoveContainer" containerID="8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01" Dec 03 11:19:21 crc kubenswrapper[4756]: E1203 11:19:21.782443 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01\": container with ID starting with 8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01 not found: ID does not exist" containerID="8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.782474 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01"} err="failed to get container status \"8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01\": rpc error: code = NotFound desc = could not find container \"8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01\": container with ID starting with 8af208a63b41b71f0949e39bfef7ad73531b0ae367996fad9f0574d9a9710a01 not found: ID does not exist" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.782498 4756 scope.go:117] "RemoveContainer" containerID="7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c" Dec 03 11:19:21 crc kubenswrapper[4756]: E1203 11:19:21.783164 4756 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c\": container with ID starting with 7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c not found: ID does not exist" containerID="7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.783203 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c"} err="failed to get container status \"7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c\": rpc error: code = NotFound desc = could not find container \"7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c\": container with ID starting with 7349e90dea36ebdd316693f48beaf510f617c3a10939ed26a4226ed96204d20c not found: ID does not exist" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.809509 4756 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.809594 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f54d4af-c690-4860-924b-83eacde4c90c-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.929761 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xlvvh"] Dec 03 11:19:21 crc kubenswrapper[4756]: I1203 11:19:21.940877 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xlvvh"] Dec 03 11:19:22 crc kubenswrapper[4756]: I1203 11:19:22.607795 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:19:22 crc kubenswrapper[4756]: I1203 11:19:22.608153 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:19:22 crc kubenswrapper[4756]: I1203 11:19:22.608222 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:19:22 crc kubenswrapper[4756]: I1203 11:19:22.609333 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:19:22 crc kubenswrapper[4756]: I1203 11:19:22.609413 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" gracePeriod=600 Dec 03 11:19:23 crc kubenswrapper[4756]: E1203 11:19:23.247371 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:19:23 crc kubenswrapper[4756]: I1203 11:19:23.251758 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f54d4af-c690-4860-924b-83eacde4c90c" path="/var/lib/kubelet/pods/0f54d4af-c690-4860-924b-83eacde4c90c/volumes" Dec 03 11:19:23 crc kubenswrapper[4756]: I1203 11:19:23.617328 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" exitCode=0 Dec 03 11:19:23 crc kubenswrapper[4756]: I1203 11:19:23.617402 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367"} Dec 03 11:19:23 crc kubenswrapper[4756]: I1203 11:19:23.617471 4756 scope.go:117] "RemoveContainer" containerID="e0355b17b2ab1c82e928552e43898507356fa2e99840efcf360f3485a28faa26" Dec 03 11:19:23 crc kubenswrapper[4756]: I1203 11:19:23.618491 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:19:23 crc kubenswrapper[4756]: E1203 11:19:23.619091 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.109467 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22"] Dec 03 
11:19:29 crc kubenswrapper[4756]: E1203 11:19:29.111126 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f54d4af-c690-4860-924b-83eacde4c90c" containerName="dnsmasq-dns" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.111149 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f54d4af-c690-4860-924b-83eacde4c90c" containerName="dnsmasq-dns" Dec 03 11:19:29 crc kubenswrapper[4756]: E1203 11:19:29.111172 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f54d4af-c690-4860-924b-83eacde4c90c" containerName="init" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.111178 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f54d4af-c690-4860-924b-83eacde4c90c" containerName="init" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.111436 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f54d4af-c690-4860-924b-83eacde4c90c" containerName="dnsmasq-dns" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.112331 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.117160 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.117377 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.117216 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.117505 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.127723 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22"] Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.189516 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.189633 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.189792 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.189836 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2795q\" (UniqueName: \"kubernetes.io/projected/1aab3068-a578-4f78-8326-53aeb4dd74bf-kube-api-access-2795q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.291988 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.292097 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.292217 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.292258 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2795q\" (UniqueName: \"kubernetes.io/projected/1aab3068-a578-4f78-8326-53aeb4dd74bf-kube-api-access-2795q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.300668 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.302687 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.302844 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.324923 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2795q\" (UniqueName: \"kubernetes.io/projected/1aab3068-a578-4f78-8326-53aeb4dd74bf-kube-api-access-2795q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-klk22\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:29 crc kubenswrapper[4756]: I1203 11:19:29.436432 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:19:30 crc kubenswrapper[4756]: I1203 11:19:30.234682 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22"] Dec 03 11:19:30 crc kubenswrapper[4756]: I1203 11:19:30.708819 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" event={"ID":"1aab3068-a578-4f78-8326-53aeb4dd74bf","Type":"ContainerStarted","Data":"4873336642b67152c2f6ef588726e059e773614b3c2ced400eddd6840cd29acd"} Dec 03 11:19:39 crc kubenswrapper[4756]: I1203 11:19:39.249222 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:19:39 crc kubenswrapper[4756]: E1203 11:19:39.250451 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:19:44 crc kubenswrapper[4756]: I1203 11:19:44.911693 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" 
event={"ID":"1aab3068-a578-4f78-8326-53aeb4dd74bf","Type":"ContainerStarted","Data":"ce369edaca4b1f592dc8e745ac48419c90bfd7160f87d7eab24758c0fbd74146"} Dec 03 11:19:44 crc kubenswrapper[4756]: I1203 11:19:44.914858 4756 generic.go:334] "Generic (PLEG): container finished" podID="a347e9e2-376b-44ac-92de-25736c30ec1e" containerID="ddc0e7c56a793b6aaaee1fcf5009d7792180deda4b503756726991e69df5938a" exitCode=0 Dec 03 11:19:44 crc kubenswrapper[4756]: I1203 11:19:44.915019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a347e9e2-376b-44ac-92de-25736c30ec1e","Type":"ContainerDied","Data":"ddc0e7c56a793b6aaaee1fcf5009d7792180deda4b503756726991e69df5938a"} Dec 03 11:19:45 crc kubenswrapper[4756]: I1203 11:19:45.928268 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a347e9e2-376b-44ac-92de-25736c30ec1e","Type":"ContainerStarted","Data":"40a0ed57d944b296975bab218c86b107fb4047dc977e9b334aa0ab4159943436"} Dec 03 11:19:45 crc kubenswrapper[4756]: I1203 11:19:45.929087 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 11:19:45 crc kubenswrapper[4756]: I1203 11:19:45.951353 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" podStartSLOduration=2.61094807 podStartE2EDuration="16.951327764s" podCreationTimestamp="2025-12-03 11:19:29 +0000 UTC" firstStartedPulling="2025-12-03 11:19:30.243879036 +0000 UTC m=+1581.273880280" lastFinishedPulling="2025-12-03 11:19:44.58425873 +0000 UTC m=+1595.614259974" observedRunningTime="2025-12-03 11:19:45.950389455 +0000 UTC m=+1596.980390699" watchObservedRunningTime="2025-12-03 11:19:45.951327764 +0000 UTC m=+1596.981329008" Dec 03 11:19:45 crc kubenswrapper[4756]: I1203 11:19:45.991157 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=38.991128954 podStartE2EDuration="38.991128954s" podCreationTimestamp="2025-12-03 11:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:19:45.985767816 +0000 UTC m=+1597.015769060" watchObservedRunningTime="2025-12-03 11:19:45.991128954 +0000 UTC m=+1597.021130198" Dec 03 11:19:46 crc kubenswrapper[4756]: I1203 11:19:46.941485 4756 generic.go:334] "Generic (PLEG): container finished" podID="1bfa9eab-e774-49f9-b1f6-f2afba51c9ae" containerID="68f497219e3b741e45940e92ec61eb20a1c05b4890ed23b01e2e6632faac74ea" exitCode=0 Dec 03 11:19:46 crc kubenswrapper[4756]: I1203 11:19:46.941570 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae","Type":"ContainerDied","Data":"68f497219e3b741e45940e92ec61eb20a1c05b4890ed23b01e2e6632faac74ea"} Dec 03 11:19:47 crc kubenswrapper[4756]: I1203 11:19:47.145387 4756 scope.go:117] "RemoveContainer" containerID="a28f520e94ce480a4d3fb55960962902ba76bfab9a9e17ec5191a5a39a3312c3" Dec 03 11:19:49 crc kubenswrapper[4756]: I1203 11:19:49.134289 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1bfa9eab-e774-49f9-b1f6-f2afba51c9ae","Type":"ContainerStarted","Data":"3c2e69d9ce6529824f2180ec5ba1193c956ddd57813b9bc04ae1e66853837a34"} Dec 03 11:19:49 crc kubenswrapper[4756]: I1203 11:19:49.136109 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:19:49 crc kubenswrapper[4756]: I1203 11:19:49.171855 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.171831589 podStartE2EDuration="41.171831589s" podCreationTimestamp="2025-12-03 11:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:19:49.169386883 +0000 UTC m=+1600.199388137" watchObservedRunningTime="2025-12-03 11:19:49.171831589 +0000 UTC m=+1600.201832833" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.234586 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:19:52 crc kubenswrapper[4756]: E1203 11:19:52.235453 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.693863 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9v6k6"] Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.697708 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.724605 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9v6k6"] Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.819502 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-utilities\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.819725 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9c9q\" (UniqueName: \"kubernetes.io/projected/a4b3f6c6-2435-49bb-958d-7be3eb83b624-kube-api-access-h9c9q\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.820024 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-catalog-content\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.922584 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9c9q\" (UniqueName: \"kubernetes.io/projected/a4b3f6c6-2435-49bb-958d-7be3eb83b624-kube-api-access-h9c9q\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.922704 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-catalog-content\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.922808 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-utilities\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.923396 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-utilities\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.923453 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-catalog-content\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:52 crc kubenswrapper[4756]: I1203 11:19:52.946363 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9c9q\" (UniqueName: \"kubernetes.io/projected/a4b3f6c6-2435-49bb-958d-7be3eb83b624-kube-api-access-h9c9q\") pod \"redhat-marketplace-9v6k6\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:53 crc kubenswrapper[4756]: I1203 11:19:53.022651 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:19:53 crc kubenswrapper[4756]: I1203 11:19:53.626092 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9v6k6"] Dec 03 11:19:54 crc kubenswrapper[4756]: I1203 11:19:54.196396 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerID="9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24" exitCode=0 Dec 03 11:19:54 crc kubenswrapper[4756]: I1203 11:19:54.196508 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v6k6" event={"ID":"a4b3f6c6-2435-49bb-958d-7be3eb83b624","Type":"ContainerDied","Data":"9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24"} Dec 03 11:19:54 crc kubenswrapper[4756]: I1203 11:19:54.196986 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v6k6" event={"ID":"a4b3f6c6-2435-49bb-958d-7be3eb83b624","Type":"ContainerStarted","Data":"fdacb3e45785ec646beca44807d6bd244f4b801bd62f6421fb7561995cf7d3c3"} Dec 03 11:19:56 crc kubenswrapper[4756]: I1203 11:19:56.222652 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerID="ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273" exitCode=0 Dec 03 11:19:56 crc kubenswrapper[4756]: I1203 11:19:56.222721 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v6k6" event={"ID":"a4b3f6c6-2435-49bb-958d-7be3eb83b624","Type":"ContainerDied","Data":"ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273"} Dec 03 11:19:57 crc kubenswrapper[4756]: I1203 11:19:57.284169 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v6k6" 
event={"ID":"a4b3f6c6-2435-49bb-958d-7be3eb83b624","Type":"ContainerStarted","Data":"34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12"} Dec 03 11:19:57 crc kubenswrapper[4756]: I1203 11:19:57.331873 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9v6k6" podStartSLOduration=2.658490237 podStartE2EDuration="5.331840862s" podCreationTimestamp="2025-12-03 11:19:52 +0000 UTC" firstStartedPulling="2025-12-03 11:19:54.199360119 +0000 UTC m=+1605.229361363" lastFinishedPulling="2025-12-03 11:19:56.872710744 +0000 UTC m=+1607.902711988" observedRunningTime="2025-12-03 11:19:57.31637064 +0000 UTC m=+1608.346371894" watchObservedRunningTime="2025-12-03 11:19:57.331840862 +0000 UTC m=+1608.361842106" Dec 03 11:19:57 crc kubenswrapper[4756]: I1203 11:19:57.829385 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a347e9e2-376b-44ac-92de-25736c30ec1e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.209:5671: connect: connection refused" Dec 03 11:19:59 crc kubenswrapper[4756]: I1203 11:19:59.134481 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1bfa9eab-e774-49f9-b1f6-f2afba51c9ae" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.210:5671: connect: connection refused" Dec 03 11:20:02 crc kubenswrapper[4756]: I1203 11:20:02.340649 4756 generic.go:334] "Generic (PLEG): container finished" podID="1aab3068-a578-4f78-8326-53aeb4dd74bf" containerID="ce369edaca4b1f592dc8e745ac48419c90bfd7160f87d7eab24758c0fbd74146" exitCode=0 Dec 03 11:20:02 crc kubenswrapper[4756]: I1203 11:20:02.340777 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" 
event={"ID":"1aab3068-a578-4f78-8326-53aeb4dd74bf","Type":"ContainerDied","Data":"ce369edaca4b1f592dc8e745ac48419c90bfd7160f87d7eab24758c0fbd74146"} Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.024626 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.024697 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.083337 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.409681 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.468246 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9v6k6"] Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.842113 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.992549 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2795q\" (UniqueName: \"kubernetes.io/projected/1aab3068-a578-4f78-8326-53aeb4dd74bf-kube-api-access-2795q\") pod \"1aab3068-a578-4f78-8326-53aeb4dd74bf\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.992620 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-repo-setup-combined-ca-bundle\") pod \"1aab3068-a578-4f78-8326-53aeb4dd74bf\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.992841 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory\") pod \"1aab3068-a578-4f78-8326-53aeb4dd74bf\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " Dec 03 11:20:03 crc kubenswrapper[4756]: I1203 11:20:03.992925 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-ssh-key\") pod \"1aab3068-a578-4f78-8326-53aeb4dd74bf\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.001935 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1aab3068-a578-4f78-8326-53aeb4dd74bf" (UID: "1aab3068-a578-4f78-8326-53aeb4dd74bf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.002668 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aab3068-a578-4f78-8326-53aeb4dd74bf-kube-api-access-2795q" (OuterVolumeSpecName: "kube-api-access-2795q") pod "1aab3068-a578-4f78-8326-53aeb4dd74bf" (UID: "1aab3068-a578-4f78-8326-53aeb4dd74bf"). InnerVolumeSpecName "kube-api-access-2795q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:20:04 crc kubenswrapper[4756]: E1203 11:20:04.028037 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory podName:1aab3068-a578-4f78-8326-53aeb4dd74bf nodeName:}" failed. No retries permitted until 2025-12-03 11:20:04.527990046 +0000 UTC m=+1615.557991290 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory") pod "1aab3068-a578-4f78-8326-53aeb4dd74bf" (UID: "1aab3068-a578-4f78-8326-53aeb4dd74bf") : error deleting /var/lib/kubelet/pods/1aab3068-a578-4f78-8326-53aeb4dd74bf/volume-subpaths: remove /var/lib/kubelet/pods/1aab3068-a578-4f78-8326-53aeb4dd74bf/volume-subpaths: no such file or directory Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.032654 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1aab3068-a578-4f78-8326-53aeb4dd74bf" (UID: "1aab3068-a578-4f78-8326-53aeb4dd74bf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.096374 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2795q\" (UniqueName: \"kubernetes.io/projected/1aab3068-a578-4f78-8326-53aeb4dd74bf-kube-api-access-2795q\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.096425 4756 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.096444 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.234590 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:20:04 crc kubenswrapper[4756]: E1203 11:20:04.235390 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.363847 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.363833 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-klk22" event={"ID":"1aab3068-a578-4f78-8326-53aeb4dd74bf","Type":"ContainerDied","Data":"4873336642b67152c2f6ef588726e059e773614b3c2ced400eddd6840cd29acd"} Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.363904 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4873336642b67152c2f6ef588726e059e773614b3c2ced400eddd6840cd29acd" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.550245 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt"] Dec 03 11:20:04 crc kubenswrapper[4756]: E1203 11:20:04.550905 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aab3068-a578-4f78-8326-53aeb4dd74bf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.553508 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aab3068-a578-4f78-8326-53aeb4dd74bf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.554708 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aab3068-a578-4f78-8326-53aeb4dd74bf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.555884 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.565721 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt"] Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.606554 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory\") pod \"1aab3068-a578-4f78-8326-53aeb4dd74bf\" (UID: \"1aab3068-a578-4f78-8326-53aeb4dd74bf\") " Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.612285 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory" (OuterVolumeSpecName: "inventory") pod "1aab3068-a578-4f78-8326-53aeb4dd74bf" (UID: "1aab3068-a578-4f78-8326-53aeb4dd74bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.709679 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.710250 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.710524 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtn67\" (UniqueName: \"kubernetes.io/projected/88321f96-08ea-4c4d-9665-8530b28e1a66-kube-api-access-xtn67\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.710665 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1aab3068-a578-4f78-8326-53aeb4dd74bf-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.812491 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.812700 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.812783 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtn67\" (UniqueName: \"kubernetes.io/projected/88321f96-08ea-4c4d-9665-8530b28e1a66-kube-api-access-xtn67\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.818033 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.824305 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.834937 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtn67\" (UniqueName: \"kubernetes.io/projected/88321f96-08ea-4c4d-9665-8530b28e1a66-kube-api-access-xtn67\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xksxt\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:04 crc kubenswrapper[4756]: I1203 11:20:04.883976 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:05 crc kubenswrapper[4756]: I1203 11:20:05.269324 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt"] Dec 03 11:20:05 crc kubenswrapper[4756]: I1203 11:20:05.379909 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9v6k6" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="registry-server" containerID="cri-o://34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12" gracePeriod=2 Dec 03 11:20:05 crc kubenswrapper[4756]: I1203 11:20:05.380588 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" event={"ID":"88321f96-08ea-4c4d-9665-8530b28e1a66","Type":"ContainerStarted","Data":"5993c1ebd1a916963b7a8a00ba642e343b54787cab1c366fa93b40cc16ff8ba9"} Dec 03 11:20:05 crc kubenswrapper[4756]: I1203 11:20:05.893281 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.047552 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-utilities\") pod \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.048424 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-catalog-content\") pod \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.048506 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9c9q\" (UniqueName: \"kubernetes.io/projected/a4b3f6c6-2435-49bb-958d-7be3eb83b624-kube-api-access-h9c9q\") pod \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\" (UID: \"a4b3f6c6-2435-49bb-958d-7be3eb83b624\") " Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.049214 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-utilities" (OuterVolumeSpecName: "utilities") pod "a4b3f6c6-2435-49bb-958d-7be3eb83b624" (UID: "a4b3f6c6-2435-49bb-958d-7be3eb83b624"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.052995 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b3f6c6-2435-49bb-958d-7be3eb83b624-kube-api-access-h9c9q" (OuterVolumeSpecName: "kube-api-access-h9c9q") pod "a4b3f6c6-2435-49bb-958d-7be3eb83b624" (UID: "a4b3f6c6-2435-49bb-958d-7be3eb83b624"). InnerVolumeSpecName "kube-api-access-h9c9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.073238 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4b3f6c6-2435-49bb-958d-7be3eb83b624" (UID: "a4b3f6c6-2435-49bb-958d-7be3eb83b624"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.152495 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9c9q\" (UniqueName: \"kubernetes.io/projected/a4b3f6c6-2435-49bb-958d-7be3eb83b624-kube-api-access-h9c9q\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.152641 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.152661 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4b3f6c6-2435-49bb-958d-7be3eb83b624-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.401381 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" event={"ID":"88321f96-08ea-4c4d-9665-8530b28e1a66","Type":"ContainerStarted","Data":"44da13aaeccc70dc253ac3d3a1a293def904214a6ad4da44d902e45af8b321b1"} Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.404360 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerID="34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12" exitCode=0 Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.404427 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v6k6" event={"ID":"a4b3f6c6-2435-49bb-958d-7be3eb83b624","Type":"ContainerDied","Data":"34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12"} Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.404467 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9v6k6" event={"ID":"a4b3f6c6-2435-49bb-958d-7be3eb83b624","Type":"ContainerDied","Data":"fdacb3e45785ec646beca44807d6bd244f4b801bd62f6421fb7561995cf7d3c3"} Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.404488 4756 scope.go:117] "RemoveContainer" containerID="34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.404633 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9v6k6" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.441067 4756 scope.go:117] "RemoveContainer" containerID="ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.454261 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" podStartSLOduration=1.730335682 podStartE2EDuration="2.454234245s" podCreationTimestamp="2025-12-03 11:20:04 +0000 UTC" firstStartedPulling="2025-12-03 11:20:05.27720802 +0000 UTC m=+1616.307209264" lastFinishedPulling="2025-12-03 11:20:06.001106583 +0000 UTC m=+1617.031107827" observedRunningTime="2025-12-03 11:20:06.431680202 +0000 UTC m=+1617.461681446" watchObservedRunningTime="2025-12-03 11:20:06.454234245 +0000 UTC m=+1617.484235489" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.460114 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9v6k6"] Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.473475 4756 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9v6k6"] Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.475336 4756 scope.go:117] "RemoveContainer" containerID="9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.522727 4756 scope.go:117] "RemoveContainer" containerID="34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12" Dec 03 11:20:06 crc kubenswrapper[4756]: E1203 11:20:06.524312 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12\": container with ID starting with 34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12 not found: ID does not exist" containerID="34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.524385 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12"} err="failed to get container status \"34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12\": rpc error: code = NotFound desc = could not find container \"34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12\": container with ID starting with 34b4508f32f0d34d2ff58028102e76a35c301958c35a2b61e287c109e9aaaa12 not found: ID does not exist" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.524438 4756 scope.go:117] "RemoveContainer" containerID="ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273" Dec 03 11:20:06 crc kubenswrapper[4756]: E1203 11:20:06.525045 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273\": container with ID starting with 
ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273 not found: ID does not exist" containerID="ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.525099 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273"} err="failed to get container status \"ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273\": rpc error: code = NotFound desc = could not find container \"ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273\": container with ID starting with ad619541511b4647b36fb2ff4120fb3846a88629dc4fe0069b7279256843d273 not found: ID does not exist" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.525132 4756 scope.go:117] "RemoveContainer" containerID="9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24" Dec 03 11:20:06 crc kubenswrapper[4756]: E1203 11:20:06.525651 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24\": container with ID starting with 9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24 not found: ID does not exist" containerID="9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24" Dec 03 11:20:06 crc kubenswrapper[4756]: I1203 11:20:06.525700 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24"} err="failed to get container status \"9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24\": rpc error: code = NotFound desc = could not find container \"9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24\": container with ID starting with 9c332b5f4633ae04d9f2dc7611a3d665c7ce2829762db7bba05216521273fd24 not found: ID does not 
exist" Dec 03 11:20:07 crc kubenswrapper[4756]: I1203 11:20:07.251204 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" path="/var/lib/kubelet/pods/a4b3f6c6-2435-49bb-958d-7be3eb83b624/volumes" Dec 03 11:20:07 crc kubenswrapper[4756]: I1203 11:20:07.829468 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 11:20:09 crc kubenswrapper[4756]: I1203 11:20:09.134251 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 11:20:09 crc kubenswrapper[4756]: I1203 11:20:09.454013 4756 generic.go:334] "Generic (PLEG): container finished" podID="88321f96-08ea-4c4d-9665-8530b28e1a66" containerID="44da13aaeccc70dc253ac3d3a1a293def904214a6ad4da44d902e45af8b321b1" exitCode=0 Dec 03 11:20:09 crc kubenswrapper[4756]: I1203 11:20:09.454144 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" event={"ID":"88321f96-08ea-4c4d-9665-8530b28e1a66","Type":"ContainerDied","Data":"44da13aaeccc70dc253ac3d3a1a293def904214a6ad4da44d902e45af8b321b1"} Dec 03 11:20:10 crc kubenswrapper[4756]: I1203 11:20:10.966898 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.117534 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-ssh-key\") pod \"88321f96-08ea-4c4d-9665-8530b28e1a66\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.117877 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtn67\" (UniqueName: \"kubernetes.io/projected/88321f96-08ea-4c4d-9665-8530b28e1a66-kube-api-access-xtn67\") pod \"88321f96-08ea-4c4d-9665-8530b28e1a66\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.117909 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-inventory\") pod \"88321f96-08ea-4c4d-9665-8530b28e1a66\" (UID: \"88321f96-08ea-4c4d-9665-8530b28e1a66\") " Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.124532 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88321f96-08ea-4c4d-9665-8530b28e1a66-kube-api-access-xtn67" (OuterVolumeSpecName: "kube-api-access-xtn67") pod "88321f96-08ea-4c4d-9665-8530b28e1a66" (UID: "88321f96-08ea-4c4d-9665-8530b28e1a66"). InnerVolumeSpecName "kube-api-access-xtn67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.149674 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-inventory" (OuterVolumeSpecName: "inventory") pod "88321f96-08ea-4c4d-9665-8530b28e1a66" (UID: "88321f96-08ea-4c4d-9665-8530b28e1a66"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.151271 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "88321f96-08ea-4c4d-9665-8530b28e1a66" (UID: "88321f96-08ea-4c4d-9665-8530b28e1a66"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.223983 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.224035 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtn67\" (UniqueName: \"kubernetes.io/projected/88321f96-08ea-4c4d-9665-8530b28e1a66-kube-api-access-xtn67\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.224054 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88321f96-08ea-4c4d-9665-8530b28e1a66-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.479619 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" event={"ID":"88321f96-08ea-4c4d-9665-8530b28e1a66","Type":"ContainerDied","Data":"5993c1ebd1a916963b7a8a00ba642e343b54787cab1c366fa93b40cc16ff8ba9"} Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.479680 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5993c1ebd1a916963b7a8a00ba642e343b54787cab1c366fa93b40cc16ff8ba9" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.479738 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xksxt" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.576522 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b"] Dec 03 11:20:11 crc kubenswrapper[4756]: E1203 11:20:11.577648 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="extract-utilities" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.577672 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="extract-utilities" Dec 03 11:20:11 crc kubenswrapper[4756]: E1203 11:20:11.577688 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="registry-server" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.577694 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="registry-server" Dec 03 11:20:11 crc kubenswrapper[4756]: E1203 11:20:11.577726 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="extract-content" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.577746 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="extract-content" Dec 03 11:20:11 crc kubenswrapper[4756]: E1203 11:20:11.577772 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88321f96-08ea-4c4d-9665-8530b28e1a66" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.577784 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="88321f96-08ea-4c4d-9665-8530b28e1a66" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.578077 4756 
memory_manager.go:354] "RemoveStaleState removing state" podUID="88321f96-08ea-4c4d-9665-8530b28e1a66" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.578103 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b3f6c6-2435-49bb-958d-7be3eb83b624" containerName="registry-server" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.578993 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.581558 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.582296 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.582352 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.582805 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.593878 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b"] Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.736289 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.736381 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87ngk\" (UniqueName: \"kubernetes.io/projected/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-kube-api-access-87ngk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.736627 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.737350 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.839872 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.840045 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87ngk\" (UniqueName: 
\"kubernetes.io/projected/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-kube-api-access-87ngk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.840628 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.841312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.846001 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.850138 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.857392 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.859967 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87ngk\" (UniqueName: \"kubernetes.io/projected/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-kube-api-access-87ngk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:11 crc kubenswrapper[4756]: I1203 11:20:11.911082 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:20:12 crc kubenswrapper[4756]: W1203 11:20:12.501669 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e68e04a_ae64_43a6_8dd0_a3fdeb9642e2.slice/crio-702bb54c61aebe912d4f7639ec69238fe5a6762632a1119a5923c7c789b1f909 WatchSource:0}: Error finding container 702bb54c61aebe912d4f7639ec69238fe5a6762632a1119a5923c7c789b1f909: Status 404 returned error can't find the container with id 702bb54c61aebe912d4f7639ec69238fe5a6762632a1119a5923c7c789b1f909 Dec 03 11:20:12 crc kubenswrapper[4756]: I1203 11:20:12.504921 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b"] Dec 03 11:20:13 crc kubenswrapper[4756]: I1203 11:20:13.506484 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" event={"ID":"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2","Type":"ContainerStarted","Data":"93b3e9a168914981798a8ff6360230cbbb035ac36a4554370ef9b5e0d4bca2b0"} Dec 03 11:20:13 crc kubenswrapper[4756]: I1203 11:20:13.507770 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" event={"ID":"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2","Type":"ContainerStarted","Data":"702bb54c61aebe912d4f7639ec69238fe5a6762632a1119a5923c7c789b1f909"} Dec 03 11:20:13 crc kubenswrapper[4756]: I1203 11:20:13.533225 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" podStartSLOduration=2.143108852 podStartE2EDuration="2.533196571s" podCreationTimestamp="2025-12-03 11:20:11 +0000 UTC" firstStartedPulling="2025-12-03 11:20:12.506034462 +0000 UTC m=+1623.536035706" lastFinishedPulling="2025-12-03 11:20:12.896122191 +0000 UTC m=+1623.926123425" 
observedRunningTime="2025-12-03 11:20:13.526245064 +0000 UTC m=+1624.556246308" watchObservedRunningTime="2025-12-03 11:20:13.533196571 +0000 UTC m=+1624.563197825" Dec 03 11:20:16 crc kubenswrapper[4756]: I1203 11:20:16.234345 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:20:16 crc kubenswrapper[4756]: E1203 11:20:16.235200 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:20:28 crc kubenswrapper[4756]: I1203 11:20:28.234317 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:20:28 crc kubenswrapper[4756]: E1203 11:20:28.235564 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:20:43 crc kubenswrapper[4756]: I1203 11:20:43.234455 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:20:43 crc kubenswrapper[4756]: E1203 11:20:43.235684 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:20:47 crc kubenswrapper[4756]: I1203 11:20:47.426120 4756 scope.go:117] "RemoveContainer" containerID="49d48b50de161a7320a345f0e3295068f7200ceb79182a6f737c41f2039bedad" Dec 03 11:20:47 crc kubenswrapper[4756]: I1203 11:20:47.476863 4756 scope.go:117] "RemoveContainer" containerID="b3a966ac7cbf677e19902d18b0058cab669e59603d9d6ed8d886604b6dc55f40" Dec 03 11:20:47 crc kubenswrapper[4756]: I1203 11:20:47.529306 4756 scope.go:117] "RemoveContainer" containerID="2db6822e9c6bca34d936fb43bf28c3c1b2e13468ff8da6ab4a5c8c26e65647d6" Dec 03 11:20:47 crc kubenswrapper[4756]: I1203 11:20:47.556752 4756 scope.go:117] "RemoveContainer" containerID="a9a5e4493e4bb0868fea47f3040c94de7637bcc857abf28583f64b587b8a495e" Dec 03 11:20:47 crc kubenswrapper[4756]: I1203 11:20:47.614413 4756 scope.go:117] "RemoveContainer" containerID="b150ef69ebeb0df56329756301c54ab55c8f91cbbd17da2b3dbdd3d52f97008a" Dec 03 11:20:56 crc kubenswrapper[4756]: I1203 11:20:56.234058 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:20:56 crc kubenswrapper[4756]: E1203 11:20:56.235169 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.028789 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cw55g"] Dec 03 11:21:07 crc kubenswrapper[4756]: 
I1203 11:21:07.032932 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.059197 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cw55g"] Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.176532 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-utilities\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.176909 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgs5d\" (UniqueName: \"kubernetes.io/projected/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-kube-api-access-cgs5d\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.177472 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-catalog-content\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.279439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-catalog-content\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc 
kubenswrapper[4756]: I1203 11:21:07.280055 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-utilities\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.280161 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-catalog-content\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.280175 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgs5d\" (UniqueName: \"kubernetes.io/projected/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-kube-api-access-cgs5d\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.280455 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-utilities\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.303637 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgs5d\" (UniqueName: \"kubernetes.io/projected/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-kube-api-access-cgs5d\") pod \"certified-operators-cw55g\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 
11:21:07.374221 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:07 crc kubenswrapper[4756]: I1203 11:21:07.760145 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cw55g"] Dec 03 11:21:08 crc kubenswrapper[4756]: I1203 11:21:08.749535 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerStarted","Data":"2a797dcab48531aa7bf46ecee5ce8e962ffb73bf1b464e12be21837c9b2acca8"} Dec 03 11:21:09 crc kubenswrapper[4756]: I1203 11:21:09.246249 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:21:09 crc kubenswrapper[4756]: E1203 11:21:09.246491 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:21:09 crc kubenswrapper[4756]: I1203 11:21:09.764546 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerStarted","Data":"3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7"} Dec 03 11:21:10 crc kubenswrapper[4756]: I1203 11:21:10.778558 4756 generic.go:334] "Generic (PLEG): container finished" podID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerID="3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7" exitCode=0 Dec 03 11:21:10 crc kubenswrapper[4756]: I1203 11:21:10.778635 4756 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerDied","Data":"3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7"} Dec 03 11:21:10 crc kubenswrapper[4756]: I1203 11:21:10.781614 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:21:12 crc kubenswrapper[4756]: I1203 11:21:12.807678 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerStarted","Data":"b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c"} Dec 03 11:21:13 crc kubenswrapper[4756]: I1203 11:21:13.822246 4756 generic.go:334] "Generic (PLEG): container finished" podID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerID="b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c" exitCode=0 Dec 03 11:21:13 crc kubenswrapper[4756]: I1203 11:21:13.822316 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerDied","Data":"b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c"} Dec 03 11:21:16 crc kubenswrapper[4756]: I1203 11:21:16.862987 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerStarted","Data":"f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727"} Dec 03 11:21:16 crc kubenswrapper[4756]: I1203 11:21:16.895402 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cw55g" podStartSLOduration=4.670261755 podStartE2EDuration="9.895368786s" podCreationTimestamp="2025-12-03 11:21:07 +0000 UTC" firstStartedPulling="2025-12-03 11:21:10.781327611 +0000 UTC m=+1681.811328855" 
lastFinishedPulling="2025-12-03 11:21:16.006434642 +0000 UTC m=+1687.036435886" observedRunningTime="2025-12-03 11:21:16.885130327 +0000 UTC m=+1687.915131571" watchObservedRunningTime="2025-12-03 11:21:16.895368786 +0000 UTC m=+1687.925370030" Dec 03 11:21:17 crc kubenswrapper[4756]: I1203 11:21:17.374385 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:17 crc kubenswrapper[4756]: I1203 11:21:17.374872 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:18 crc kubenswrapper[4756]: I1203 11:21:18.429695 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cw55g" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="registry-server" probeResult="failure" output=< Dec 03 11:21:18 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 11:21:18 crc kubenswrapper[4756]: > Dec 03 11:21:21 crc kubenswrapper[4756]: I1203 11:21:21.235214 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:21:21 crc kubenswrapper[4756]: E1203 11:21:21.236302 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:21:27 crc kubenswrapper[4756]: I1203 11:21:27.427365 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:27 crc kubenswrapper[4756]: I1203 11:21:27.482438 4756 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:27 crc kubenswrapper[4756]: I1203 11:21:27.676363 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cw55g"] Dec 03 11:21:28 crc kubenswrapper[4756]: I1203 11:21:28.994711 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cw55g" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="registry-server" containerID="cri-o://f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727" gracePeriod=2 Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.503853 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.654446 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-utilities\") pod \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.654819 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgs5d\" (UniqueName: \"kubernetes.io/projected/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-kube-api-access-cgs5d\") pod \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.654972 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-catalog-content\") pod \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\" (UID: \"2b971c8e-04cd-463f-a1a6-f2bdae1337d8\") " Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.656798 4756 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-utilities" (OuterVolumeSpecName: "utilities") pod "2b971c8e-04cd-463f-a1a6-f2bdae1337d8" (UID: "2b971c8e-04cd-463f-a1a6-f2bdae1337d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.662629 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-kube-api-access-cgs5d" (OuterVolumeSpecName: "kube-api-access-cgs5d") pod "2b971c8e-04cd-463f-a1a6-f2bdae1337d8" (UID: "2b971c8e-04cd-463f-a1a6-f2bdae1337d8"). InnerVolumeSpecName "kube-api-access-cgs5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.709686 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b971c8e-04cd-463f-a1a6-f2bdae1337d8" (UID: "2b971c8e-04cd-463f-a1a6-f2bdae1337d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.758100 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.758149 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgs5d\" (UniqueName: \"kubernetes.io/projected/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-kube-api-access-cgs5d\") on node \"crc\" DevicePath \"\"" Dec 03 11:21:29 crc kubenswrapper[4756]: I1203 11:21:29.758163 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b971c8e-04cd-463f-a1a6-f2bdae1337d8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.008013 4756 generic.go:334] "Generic (PLEG): container finished" podID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerID="f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727" exitCode=0 Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.008065 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerDied","Data":"f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727"} Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.008102 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cw55g" event={"ID":"2b971c8e-04cd-463f-a1a6-f2bdae1337d8","Type":"ContainerDied","Data":"2a797dcab48531aa7bf46ecee5ce8e962ffb73bf1b464e12be21837c9b2acca8"} Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.008122 4756 scope.go:117] "RemoveContainer" containerID="f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 
11:21:30.008267 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cw55g" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.052468 4756 scope.go:117] "RemoveContainer" containerID="b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.057510 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cw55g"] Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.077929 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cw55g"] Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.084977 4756 scope.go:117] "RemoveContainer" containerID="3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.141440 4756 scope.go:117] "RemoveContainer" containerID="f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727" Dec 03 11:21:30 crc kubenswrapper[4756]: E1203 11:21:30.142237 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727\": container with ID starting with f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727 not found: ID does not exist" containerID="f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.142377 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727"} err="failed to get container status \"f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727\": rpc error: code = NotFound desc = could not find container \"f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727\": container with ID starting with 
f6d843084be870d3ab5ee42bc6a3912051af7b98ed627ed0be90f9d4644fa727 not found: ID does not exist" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.142420 4756 scope.go:117] "RemoveContainer" containerID="b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c" Dec 03 11:21:30 crc kubenswrapper[4756]: E1203 11:21:30.144088 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c\": container with ID starting with b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c not found: ID does not exist" containerID="b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.144128 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c"} err="failed to get container status \"b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c\": rpc error: code = NotFound desc = could not find container \"b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c\": container with ID starting with b1ffcb350785ae1e0959fc192a4c3c777c77448d5af35fdc351e18b2737b977c not found: ID does not exist" Dec 03 11:21:30 crc kubenswrapper[4756]: I1203 11:21:30.144151 4756 scope.go:117] "RemoveContainer" containerID="3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7" Dec 03 11:21:30 crc kubenswrapper[4756]: E1203 11:21:30.144719 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7\": container with ID starting with 3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7 not found: ID does not exist" containerID="3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7" Dec 03 11:21:30 crc 
kubenswrapper[4756]: I1203 11:21:30.144755 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7"} err="failed to get container status \"3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7\": rpc error: code = NotFound desc = could not find container \"3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7\": container with ID starting with 3fed8be20f3586ef254119c5ddd34e5b3fa9dd8ca4deb8c6df212c43c88b9fc7 not found: ID does not exist" Dec 03 11:21:31 crc kubenswrapper[4756]: I1203 11:21:31.247828 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" path="/var/lib/kubelet/pods/2b971c8e-04cd-463f-a1a6-f2bdae1337d8/volumes" Dec 03 11:21:33 crc kubenswrapper[4756]: I1203 11:21:33.234677 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:21:33 crc kubenswrapper[4756]: E1203 11:21:33.235410 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:21:44 crc kubenswrapper[4756]: I1203 11:21:44.234356 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:21:44 crc kubenswrapper[4756]: E1203 11:21:44.236384 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:21:58 crc kubenswrapper[4756]: I1203 11:21:58.235119 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:21:58 crc kubenswrapper[4756]: E1203 11:21:58.236332 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:22:09 crc kubenswrapper[4756]: I1203 11:22:09.246878 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:22:09 crc kubenswrapper[4756]: E1203 11:22:09.247771 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:22:24 crc kubenswrapper[4756]: I1203 11:22:24.234723 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:22:24 crc kubenswrapper[4756]: E1203 11:22:24.235922 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:22:35 crc kubenswrapper[4756]: I1203 11:22:35.234856 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:22:35 crc kubenswrapper[4756]: E1203 11:22:35.236301 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:22:48 crc kubenswrapper[4756]: I1203 11:22:48.234740 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:22:48 crc kubenswrapper[4756]: E1203 11:22:48.235588 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:23:01 crc kubenswrapper[4756]: I1203 11:23:01.235095 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:23:01 crc kubenswrapper[4756]: E1203 11:23:01.236401 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:23:11 crc kubenswrapper[4756]: I1203 11:23:11.054199 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kl6wn"] Dec 03 11:23:11 crc kubenswrapper[4756]: I1203 11:23:11.067799 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kl6wn"] Dec 03 11:23:11 crc kubenswrapper[4756]: I1203 11:23:11.086104 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a6da-account-create-update-5nsn7"] Dec 03 11:23:11 crc kubenswrapper[4756]: I1203 11:23:11.100610 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a6da-account-create-update-5nsn7"] Dec 03 11:23:11 crc kubenswrapper[4756]: I1203 11:23:11.249865 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535da1f9-45e8-4189-9e84-0c5866c0d612" path="/var/lib/kubelet/pods/535da1f9-45e8-4189-9e84-0c5866c0d612/volumes" Dec 03 11:23:11 crc kubenswrapper[4756]: I1203 11:23:11.250820 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f" path="/var/lib/kubelet/pods/6f09edd1-e0a0-42aa-b0e0-c23989ad7b3f/volumes" Dec 03 11:23:14 crc kubenswrapper[4756]: I1203 11:23:14.206142 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-srkv7"] Dec 03 11:23:14 crc kubenswrapper[4756]: I1203 11:23:14.220393 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-srkv7"] Dec 03 11:23:14 crc kubenswrapper[4756]: I1203 11:23:14.234780 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:23:14 crc kubenswrapper[4756]: E1203 11:23:14.235166 
4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.055549 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b30f-account-create-update-c926r"] Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.066944 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-tjhsk"] Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.079880 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a266-account-create-update-x49nv"] Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.090198 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b30f-account-create-update-c926r"] Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.102273 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-tjhsk"] Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.113153 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a266-account-create-update-x49nv"] Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.249995 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44226660-d2e9-4c13-bb9c-e2113bc0fcec" path="/var/lib/kubelet/pods/44226660-d2e9-4c13-bb9c-e2113bc0fcec/volumes" Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.251130 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec32d41-0b95-4035-b092-aa00a4c57129" path="/var/lib/kubelet/pods/aec32d41-0b95-4035-b092-aa00a4c57129/volumes" Dec 03 11:23:15 crc 
kubenswrapper[4756]: I1203 11:23:15.252145 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e512a844-d4d9-4bc3-88bb-a0f11fb31821" path="/var/lib/kubelet/pods/e512a844-d4d9-4bc3-88bb-a0f11fb31821/volumes" Dec 03 11:23:15 crc kubenswrapper[4756]: I1203 11:23:15.253150 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc6158c-bc38-42b6-9fbc-2bd15e907f48" path="/var/lib/kubelet/pods/fcc6158c-bc38-42b6-9fbc-2bd15e907f48/volumes" Dec 03 11:23:26 crc kubenswrapper[4756]: I1203 11:23:26.234746 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:23:26 crc kubenswrapper[4756]: E1203 11:23:26.236370 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:23:38 crc kubenswrapper[4756]: I1203 11:23:38.054786 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jqzxs"] Dec 03 11:23:38 crc kubenswrapper[4756]: I1203 11:23:38.066868 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jqzxs"] Dec 03 11:23:38 crc kubenswrapper[4756]: I1203 11:23:38.712379 4756 generic.go:334] "Generic (PLEG): container finished" podID="5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" containerID="93b3e9a168914981798a8ff6360230cbbb035ac36a4554370ef9b5e0d4bca2b0" exitCode=0 Dec 03 11:23:38 crc kubenswrapper[4756]: I1203 11:23:38.712484 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" 
event={"ID":"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2","Type":"ContainerDied","Data":"93b3e9a168914981798a8ff6360230cbbb035ac36a4554370ef9b5e0d4bca2b0"} Dec 03 11:23:39 crc kubenswrapper[4756]: I1203 11:23:39.429210 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfd4e44-d633-40a6-a398-1ce5e264393a" path="/var/lib/kubelet/pods/9dfd4e44-d633-40a6-a398-1ce5e264393a/volumes" Dec 03 11:23:39 crc kubenswrapper[4756]: I1203 11:23:39.462130 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:23:39 crc kubenswrapper[4756]: E1203 11:23:39.462529 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.217350 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.232344 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-ssh-key\") pod \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.232580 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-inventory\") pod \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.232615 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-bootstrap-combined-ca-bundle\") pod \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.233784 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87ngk\" (UniqueName: \"kubernetes.io/projected/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-kube-api-access-87ngk\") pod \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\" (UID: \"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2\") " Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.245654 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-kube-api-access-87ngk" (OuterVolumeSpecName: "kube-api-access-87ngk") pod "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" (UID: "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2"). InnerVolumeSpecName "kube-api-access-87ngk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.246205 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" (UID: "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.286343 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" (UID: "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.289602 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-inventory" (OuterVolumeSpecName: "inventory") pod "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" (UID: "5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.335705 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87ngk\" (UniqueName: \"kubernetes.io/projected/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-kube-api-access-87ngk\") on node \"crc\" DevicePath \"\"" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.335756 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.335766 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.335777 4756 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.736786 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" event={"ID":"5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2","Type":"ContainerDied","Data":"702bb54c61aebe912d4f7639ec69238fe5a6762632a1119a5923c7c789b1f909"} Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.736841 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="702bb54c61aebe912d4f7639ec69238fe5a6762632a1119a5923c7c789b1f909" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.736920 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.907456 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54"] Dec 03 11:23:40 crc kubenswrapper[4756]: E1203 11:23:40.908806 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="extract-content" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.908936 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="extract-content" Dec 03 11:23:40 crc kubenswrapper[4756]: E1203 11:23:40.909041 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="extract-utilities" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.909102 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="extract-utilities" Dec 03 11:23:40 crc kubenswrapper[4756]: E1203 11:23:40.909158 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.909217 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 11:23:40 crc kubenswrapper[4756]: E1203 11:23:40.909295 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="registry-server" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.909355 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="registry-server" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.909634 
4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b971c8e-04cd-463f-a1a6-f2bdae1337d8" containerName="registry-server" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.909711 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.910698 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.919462 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.919626 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.937136 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.937594 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.950440 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.950589 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.950661 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qgld\" (UniqueName: \"kubernetes.io/projected/e9caa0fb-3dd9-4219-9657-e61712c336e8-kube-api-access-8qgld\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:40 crc kubenswrapper[4756]: I1203 11:23:40.966049 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54"] Dec 03 11:23:41 crc kubenswrapper[4756]: I1203 11:23:41.052855 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:41 crc kubenswrapper[4756]: I1203 11:23:41.052975 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:41 crc kubenswrapper[4756]: I1203 11:23:41.053021 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qgld\" (UniqueName: 
\"kubernetes.io/projected/e9caa0fb-3dd9-4219-9657-e61712c336e8-kube-api-access-8qgld\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:41 crc kubenswrapper[4756]: I1203 11:23:41.064629 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:41 crc kubenswrapper[4756]: I1203 11:23:41.083840 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:41 crc kubenswrapper[4756]: I1203 11:23:41.099251 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qgld\" (UniqueName: \"kubernetes.io/projected/e9caa0fb-3dd9-4219-9657-e61712c336e8-kube-api-access-8qgld\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6jd54\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:41 crc kubenswrapper[4756]: I1203 11:23:41.232629 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:23:42 crc kubenswrapper[4756]: I1203 11:23:42.048240 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54"] Dec 03 11:23:42 crc kubenswrapper[4756]: I1203 11:23:42.759908 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" event={"ID":"e9caa0fb-3dd9-4219-9657-e61712c336e8","Type":"ContainerStarted","Data":"5bf77567611a7c13e3dfe3ab1209b3979c547e1d26a99118fb19f18a559f5dba"} Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.082785 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vn6kk"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.090748 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-nnw8s"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.134989 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-538b-account-create-update-ff9pf"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.154591 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4964-account-create-update-hzmkz"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.166557 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-vn6kk"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.176105 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-nnw8s"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.187855 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fc3a-account-create-update-jqp99"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.216025 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-538b-account-create-update-ff9pf"] Dec 03 11:23:43 crc 
kubenswrapper[4756]: I1203 11:23:43.225431 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4964-account-create-update-hzmkz"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.258641 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1" path="/var/lib/kubelet/pods/3e7e60ab-3bd7-44bc-85c3-a44eeaf171d1/volumes" Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.260095 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c9450f-ddea-4125-97b0-1099d718ec93" path="/var/lib/kubelet/pods/60c9450f-ddea-4125-97b0-1099d718ec93/volumes" Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.260916 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1877248-2c59-4f2f-b6a7-325fbac239a0" path="/var/lib/kubelet/pods/e1877248-2c59-4f2f-b6a7-325fbac239a0/volumes" Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.261805 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f637997b-3cd4-4576-abd8-385f1d6484f5" path="/var/lib/kubelet/pods/f637997b-3cd4-4576-abd8-385f1d6484f5/volumes" Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.263805 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fc3a-account-create-update-jqp99"] Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.772336 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" event={"ID":"e9caa0fb-3dd9-4219-9657-e61712c336e8","Type":"ContainerStarted","Data":"e5bb28e1eb953983cc63b0a6bc5c9d24e240084dbf9d730bde43cfcc75d953ad"} Dec 03 11:23:43 crc kubenswrapper[4756]: I1203 11:23:43.807941 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" podStartSLOduration=2.848457601 podStartE2EDuration="3.80790858s" podCreationTimestamp="2025-12-03 11:23:40 +0000 
UTC" firstStartedPulling="2025-12-03 11:23:42.069160491 +0000 UTC m=+1833.099161735" lastFinishedPulling="2025-12-03 11:23:43.02861147 +0000 UTC m=+1834.058612714" observedRunningTime="2025-12-03 11:23:43.797127544 +0000 UTC m=+1834.827128788" watchObservedRunningTime="2025-12-03 11:23:43.80790858 +0000 UTC m=+1834.837909824" Dec 03 11:23:45 crc kubenswrapper[4756]: I1203 11:23:45.248924 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604e757c-4c85-4eab-ac45-12c3236c3d1b" path="/var/lib/kubelet/pods/604e757c-4c85-4eab-ac45-12c3236c3d1b/volumes" Dec 03 11:23:47 crc kubenswrapper[4756]: I1203 11:23:47.784439 4756 scope.go:117] "RemoveContainer" containerID="be9f938b5da0a9c0f955e22cb1b0a16dca89a22017090c35aaa0d32968cd541d" Dec 03 11:23:47 crc kubenswrapper[4756]: I1203 11:23:47.821929 4756 scope.go:117] "RemoveContainer" containerID="e3c729765b8418f27c72e8639035bd1c5abf0f3e39683649341fea710d3e797e" Dec 03 11:23:47 crc kubenswrapper[4756]: I1203 11:23:47.859452 4756 scope.go:117] "RemoveContainer" containerID="23059670a483d42a68efbefa5cd077723e05d62dd46ee966723cc77ac3cd5283" Dec 03 11:23:47 crc kubenswrapper[4756]: I1203 11:23:47.917374 4756 scope.go:117] "RemoveContainer" containerID="28b29dfbec1a6402dff7b1bde8459389a0e6f76ddc9cb2799dd16ee6fead5050" Dec 03 11:23:47 crc kubenswrapper[4756]: I1203 11:23:47.983041 4756 scope.go:117] "RemoveContainer" containerID="c698ad1d5de9deafa2502775f4994204654825fc336b97240d6676eb17d3dfce" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.079289 4756 scope.go:117] "RemoveContainer" containerID="86e3b5ec1ea179c81c0e2ccfb3cc89a3e91e4201e9d3428062281c7db20c0cba" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.122175 4756 scope.go:117] "RemoveContainer" containerID="72284c79d8ff3b284c6c79e7c1fe10c8d7ce02be8ecc67e8868d635c35c20fa7" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.174237 4756 scope.go:117] "RemoveContainer" 
containerID="cf1945d2c9fef28d1709449efc0393f253159881b44f102481ba7fdadfe02d64" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.208693 4756 scope.go:117] "RemoveContainer" containerID="cfff8ca639b1b655b0282f0a8213e104e3656815007e3478757393abbfa685d2" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.254382 4756 scope.go:117] "RemoveContainer" containerID="b585e9d659ec765c93fb596e3fc6509f8d7187e70906f16bd474177bdfcd25e0" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.280520 4756 scope.go:117] "RemoveContainer" containerID="aa15435aa579c192c0725626f9ac8b92b6782c9128e0b3a0ce477ac15e2830cc" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.317500 4756 scope.go:117] "RemoveContainer" containerID="c6f98f6d8beb1822ad45047228636f7b1f1745548851476622f17821713c4ba6" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.350642 4756 scope.go:117] "RemoveContainer" containerID="050316102fe570a18862dbfb8ac43cc813c9b905e7a2f0ae973c9723f06cc3fa" Dec 03 11:23:48 crc kubenswrapper[4756]: I1203 11:23:48.377086 4756 scope.go:117] "RemoveContainer" containerID="cfb9e6ced005cec1574a8e9f5edc9f907cf9ced9a01eb7398a35e00f64842c5c" Dec 03 11:23:50 crc kubenswrapper[4756]: I1203 11:23:50.234621 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:23:50 crc kubenswrapper[4756]: E1203 11:23:50.235448 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:23:59 crc kubenswrapper[4756]: I1203 11:23:59.048128 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pxhw5"] Dec 03 11:23:59 
crc kubenswrapper[4756]: I1203 11:23:59.062533 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pxhw5"] Dec 03 11:23:59 crc kubenswrapper[4756]: I1203 11:23:59.252470 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f676418d-e449-4e99-a7f7-f1f0d89590fe" path="/var/lib/kubelet/pods/f676418d-e449-4e99-a7f7-f1f0d89590fe/volumes" Dec 03 11:24:01 crc kubenswrapper[4756]: I1203 11:24:01.234364 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:24:01 crc kubenswrapper[4756]: E1203 11:24:01.234929 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:24:16 crc kubenswrapper[4756]: I1203 11:24:16.235355 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:24:16 crc kubenswrapper[4756]: E1203 11:24:16.238121 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:24:30 crc kubenswrapper[4756]: I1203 11:24:30.234037 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:24:31 crc kubenswrapper[4756]: I1203 11:24:31.431684 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"bee67564a166c569d665ca8e6192a5b98d0e4116e06012f6ecd98a3f28ada620"} Dec 03 11:24:34 crc kubenswrapper[4756]: I1203 11:24:34.054530 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7bvmm"] Dec 03 11:24:34 crc kubenswrapper[4756]: I1203 11:24:34.065902 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7bvmm"] Dec 03 11:24:35 crc kubenswrapper[4756]: I1203 11:24:35.253695 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a714a5-c627-43b1-8bc1-85e157c25fb0" path="/var/lib/kubelet/pods/07a714a5-c627-43b1-8bc1-85e157c25fb0/volumes" Dec 03 11:24:36 crc kubenswrapper[4756]: I1203 11:24:36.044186 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sxfpk"] Dec 03 11:24:36 crc kubenswrapper[4756]: I1203 11:24:36.054566 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sxfpk"] Dec 03 11:24:37 crc kubenswrapper[4756]: I1203 11:24:37.247192 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee66879-3546-4e2b-9737-ddb96741650f" path="/var/lib/kubelet/pods/aee66879-3546-4e2b-9737-ddb96741650f/volumes" Dec 03 11:24:48 crc kubenswrapper[4756]: I1203 11:24:48.740416 4756 scope.go:117] "RemoveContainer" containerID="4fbf1df278c1ea45a4be40f36bf06c7af65b37db9076a119e097db33133c40b7" Dec 03 11:24:48 crc kubenswrapper[4756]: I1203 11:24:48.794103 4756 scope.go:117] "RemoveContainer" containerID="00ac1e2785ecdf6cd26b2d3b4c4a9c1203e04b4868e378b2521588c2baa189bb" Dec 03 11:24:48 crc kubenswrapper[4756]: I1203 11:24:48.851724 4756 scope.go:117] "RemoveContainer" containerID="c3ba64230151da45c102b293c7a8c1af0adf91a4a5ec8fd83d6fbdff8791ff39" Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.044990 4756 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fxdjr"] Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.058539 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lr48q"] Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.073629 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fxdjr"] Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.087633 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qvbgz"] Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.099370 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qvbgz"] Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.110668 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lr48q"] Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.249908 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7887343b-04ca-42bf-b260-a2d02845676c" path="/var/lib/kubelet/pods/7887343b-04ca-42bf-b260-a2d02845676c/volumes" Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.250595 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818fc868-fe09-4a91-aab2-91f11bac7386" path="/var/lib/kubelet/pods/818fc868-fe09-4a91-aab2-91f11bac7386/volumes" Dec 03 11:24:49 crc kubenswrapper[4756]: I1203 11:24:49.251294 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ceb74cf-6023-4536-bc04-6667b5f48967" path="/var/lib/kubelet/pods/9ceb74cf-6023-4536-bc04-6667b5f48967/volumes" Dec 03 11:25:08 crc kubenswrapper[4756]: I1203 11:25:08.046294 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7fh72"] Dec 03 11:25:08 crc kubenswrapper[4756]: I1203 11:25:08.062255 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7fh72"] Dec 03 11:25:09 crc kubenswrapper[4756]: I1203 
11:25:09.254652 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7e078c-dfed-40c1-ac1c-d9db28aa9d96" path="/var/lib/kubelet/pods/aa7e078c-dfed-40c1-ac1c-d9db28aa9d96/volumes" Dec 03 11:25:48 crc kubenswrapper[4756]: I1203 11:25:48.993253 4756 scope.go:117] "RemoveContainer" containerID="2cc4f65ff8a9b938578d5f21a8239fa46348a808451dcc67ad254c18a485195d" Dec 03 11:25:49 crc kubenswrapper[4756]: I1203 11:25:49.127118 4756 scope.go:117] "RemoveContainer" containerID="abbe059b6ecbdb73f056cec8a058014916bca8486c8009fd1dcedcac12a9fa25" Dec 03 11:25:49 crc kubenswrapper[4756]: I1203 11:25:49.197310 4756 scope.go:117] "RemoveContainer" containerID="42f20f45461cdc0631f64c06de988d4e72a079e9a399f06bd35c3338fbe986af" Dec 03 11:25:49 crc kubenswrapper[4756]: I1203 11:25:49.276934 4756 scope.go:117] "RemoveContainer" containerID="340472efc057804b16a85d014e55b8a5962a0b566d9e5ea60c23b8d325b23dea" Dec 03 11:25:52 crc kubenswrapper[4756]: I1203 11:25:52.340539 4756 generic.go:334] "Generic (PLEG): container finished" podID="e9caa0fb-3dd9-4219-9657-e61712c336e8" containerID="e5bb28e1eb953983cc63b0a6bc5c9d24e240084dbf9d730bde43cfcc75d953ad" exitCode=0 Dec 03 11:25:52 crc kubenswrapper[4756]: I1203 11:25:52.341099 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" event={"ID":"e9caa0fb-3dd9-4219-9657-e61712c336e8","Type":"ContainerDied","Data":"e5bb28e1eb953983cc63b0a6bc5c9d24e240084dbf9d730bde43cfcc75d953ad"} Dec 03 11:25:53 crc kubenswrapper[4756]: I1203 11:25:53.908806 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.095748 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-inventory\") pod \"e9caa0fb-3dd9-4219-9657-e61712c336e8\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.095886 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qgld\" (UniqueName: \"kubernetes.io/projected/e9caa0fb-3dd9-4219-9657-e61712c336e8-kube-api-access-8qgld\") pod \"e9caa0fb-3dd9-4219-9657-e61712c336e8\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.097268 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-ssh-key\") pod \"e9caa0fb-3dd9-4219-9657-e61712c336e8\" (UID: \"e9caa0fb-3dd9-4219-9657-e61712c336e8\") " Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.109149 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9caa0fb-3dd9-4219-9657-e61712c336e8-kube-api-access-8qgld" (OuterVolumeSpecName: "kube-api-access-8qgld") pod "e9caa0fb-3dd9-4219-9657-e61712c336e8" (UID: "e9caa0fb-3dd9-4219-9657-e61712c336e8"). InnerVolumeSpecName "kube-api-access-8qgld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.131199 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e9caa0fb-3dd9-4219-9657-e61712c336e8" (UID: "e9caa0fb-3dd9-4219-9657-e61712c336e8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.134612 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-inventory" (OuterVolumeSpecName: "inventory") pod "e9caa0fb-3dd9-4219-9657-e61712c336e8" (UID: "e9caa0fb-3dd9-4219-9657-e61712c336e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.199730 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.199779 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9caa0fb-3dd9-4219-9657-e61712c336e8-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.199794 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qgld\" (UniqueName: \"kubernetes.io/projected/e9caa0fb-3dd9-4219-9657-e61712c336e8-kube-api-access-8qgld\") on node \"crc\" DevicePath \"\"" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.366305 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" event={"ID":"e9caa0fb-3dd9-4219-9657-e61712c336e8","Type":"ContainerDied","Data":"5bf77567611a7c13e3dfe3ab1209b3979c547e1d26a99118fb19f18a559f5dba"} Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.366725 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf77567611a7c13e3dfe3ab1209b3979c547e1d26a99118fb19f18a559f5dba" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.366372 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6jd54" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.475334 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm"] Dec 03 11:25:54 crc kubenswrapper[4756]: E1203 11:25:54.475810 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9caa0fb-3dd9-4219-9657-e61712c336e8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.475831 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9caa0fb-3dd9-4219-9657-e61712c336e8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.476045 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9caa0fb-3dd9-4219-9657-e61712c336e8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.477021 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.480559 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.481084 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.484704 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.488032 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.489332 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm"] Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.595226 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.595300 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.595788 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnfq\" (UniqueName: \"kubernetes.io/projected/13546056-cc4c-4f52-81ab-79909380facb-kube-api-access-pfnfq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.698660 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfnfq\" (UniqueName: \"kubernetes.io/projected/13546056-cc4c-4f52-81ab-79909380facb-kube-api-access-pfnfq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.699300 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.699328 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.704164 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.705335 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.717564 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfnfq\" (UniqueName: \"kubernetes.io/projected/13546056-cc4c-4f52-81ab-79909380facb-kube-api-access-pfnfq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-662qm\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:54 crc kubenswrapper[4756]: I1203 11:25:54.802178 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:25:55 crc kubenswrapper[4756]: I1203 11:25:55.409992 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm"] Dec 03 11:25:56 crc kubenswrapper[4756]: I1203 11:25:56.056099 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8fddf"] Dec 03 11:25:56 crc kubenswrapper[4756]: I1203 11:25:56.069142 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8fddf"] Dec 03 11:25:56 crc kubenswrapper[4756]: I1203 11:25:56.389591 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" event={"ID":"13546056-cc4c-4f52-81ab-79909380facb","Type":"ContainerStarted","Data":"b594314cf1a8e1c37ffa535f58abb6f93f56291947273df5c865921d3f79b0df"} Dec 03 11:25:57 crc kubenswrapper[4756]: I1203 11:25:57.037022 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8p42c"] Dec 03 11:25:57 crc kubenswrapper[4756]: I1203 11:25:57.047021 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8p42c"] Dec 03 11:25:57 crc kubenswrapper[4756]: I1203 11:25:57.249320 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bf87416-9370-4447-8d8d-5d303f5126ae" path="/var/lib/kubelet/pods/2bf87416-9370-4447-8d8d-5d303f5126ae/volumes" Dec 03 11:25:57 crc kubenswrapper[4756]: I1203 11:25:57.250294 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4962e497-d702-41df-b7b7-a9a873359aa3" path="/var/lib/kubelet/pods/4962e497-d702-41df-b7b7-a9a873359aa3/volumes" Dec 03 11:25:57 crc kubenswrapper[4756]: I1203 11:25:57.402767 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" 
event={"ID":"13546056-cc4c-4f52-81ab-79909380facb","Type":"ContainerStarted","Data":"f809dc7d46af7198bd2f4e73110a190b5301626cc5bfd8e26f754cca7ba48b0f"} Dec 03 11:25:57 crc kubenswrapper[4756]: I1203 11:25:57.435145 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" podStartSLOduration=2.218071174 podStartE2EDuration="3.435124039s" podCreationTimestamp="2025-12-03 11:25:54 +0000 UTC" firstStartedPulling="2025-12-03 11:25:55.419831509 +0000 UTC m=+1966.449832753" lastFinishedPulling="2025-12-03 11:25:56.636884374 +0000 UTC m=+1967.666885618" observedRunningTime="2025-12-03 11:25:57.426906601 +0000 UTC m=+1968.456907855" watchObservedRunningTime="2025-12-03 11:25:57.435124039 +0000 UTC m=+1968.465125283" Dec 03 11:25:58 crc kubenswrapper[4756]: I1203 11:25:58.051249 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lgdzl"] Dec 03 11:25:58 crc kubenswrapper[4756]: I1203 11:25:58.069567 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-lgdzl"] Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.044913 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-aad7-account-create-update-9hdq8"] Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.058784 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-99a3-account-create-update-mlb88"] Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.068283 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-aad7-account-create-update-9hdq8"] Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.106827 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-149d-account-create-update-2gcn6"] Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.118492 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-99a3-account-create-update-mlb88"] Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.128535 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-149d-account-create-update-2gcn6"] Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.248382 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a9719b-6d2e-4f77-8a55-0bfb9b29d29a" path="/var/lib/kubelet/pods/59a9719b-6d2e-4f77-8a55-0bfb9b29d29a/volumes" Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.249262 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875a8b1b-844e-435e-8656-4fbef59b74af" path="/var/lib/kubelet/pods/875a8b1b-844e-435e-8656-4fbef59b74af/volumes" Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.249992 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00eb590-84e6-4087-b370-226af97b869a" path="/var/lib/kubelet/pods/a00eb590-84e6-4087-b370-226af97b869a/volumes" Dec 03 11:25:59 crc kubenswrapper[4756]: I1203 11:25:59.250709 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c" path="/var/lib/kubelet/pods/a3e4fb9a-1e78-4c8a-b855-6d1bb5c0f38c/volumes" Dec 03 11:26:31 crc kubenswrapper[4756]: I1203 11:26:31.080619 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nq8bd"] Dec 03 11:26:31 crc kubenswrapper[4756]: I1203 11:26:31.098279 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nq8bd"] Dec 03 11:26:31 crc kubenswrapper[4756]: I1203 11:26:31.245561 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33" path="/var/lib/kubelet/pods/2e92b6c2-b8a6-45f0-9bc8-2575d2aa1f33/volumes" Dec 03 11:26:49 crc kubenswrapper[4756]: I1203 11:26:49.426228 4756 scope.go:117] "RemoveContainer" 
containerID="a6a2090366107a5298e564b6bef95bb3c71dba499c23e2820593240dc8ef407c" Dec 03 11:26:49 crc kubenswrapper[4756]: I1203 11:26:49.490239 4756 scope.go:117] "RemoveContainer" containerID="6acedcd053e8ffafae6e69be9ee8202144a19cc0252349e48af67fecdf0d3584" Dec 03 11:26:49 crc kubenswrapper[4756]: I1203 11:26:49.518355 4756 scope.go:117] "RemoveContainer" containerID="faa409fa4b5d5e8b5263ddee5a8a43765103db243621d2699afcce694d3d82a9" Dec 03 11:26:49 crc kubenswrapper[4756]: I1203 11:26:49.567011 4756 scope.go:117] "RemoveContainer" containerID="7f826bf7b3ab2a57f705974b98d0770dbacbbcce3d3ea1d4f343433681d657f5" Dec 03 11:26:49 crc kubenswrapper[4756]: I1203 11:26:49.617596 4756 scope.go:117] "RemoveContainer" containerID="35b01d66dafca6aa51157286bbe05779718ecdac89e337a88a31c6c4ced939ec" Dec 03 11:26:49 crc kubenswrapper[4756]: I1203 11:26:49.686656 4756 scope.go:117] "RemoveContainer" containerID="d750690109fc9e17b0952f0e84617875a1110328cff2cad951b5d8b66a8aa14f" Dec 03 11:26:49 crc kubenswrapper[4756]: I1203 11:26:49.731627 4756 scope.go:117] "RemoveContainer" containerID="c653ce976c069c796fd188682eb0ea90478aeec3290ba71cb6b2b53343724ca4" Dec 03 11:26:52 crc kubenswrapper[4756]: I1203 11:26:52.607306 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:26:52 crc kubenswrapper[4756]: I1203 11:26:52.607828 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:26:57 crc kubenswrapper[4756]: I1203 11:26:57.038521 4756 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6pnv6"] Dec 03 11:26:57 crc kubenswrapper[4756]: I1203 11:26:57.052760 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6pnv6"] Dec 03 11:26:57 crc kubenswrapper[4756]: I1203 11:26:57.247791 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2c45dd-c649-4e7f-bdcb-259bbc663d8c" path="/var/lib/kubelet/pods/de2c45dd-c649-4e7f-bdcb-259bbc663d8c/volumes" Dec 03 11:27:05 crc kubenswrapper[4756]: I1203 11:27:05.058914 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7t27"] Dec 03 11:27:05 crc kubenswrapper[4756]: I1203 11:27:05.072946 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x7t27"] Dec 03 11:27:05 crc kubenswrapper[4756]: I1203 11:27:05.248227 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efda8e8-882b-44b9-9bcd-479286328ec1" path="/var/lib/kubelet/pods/0efda8e8-882b-44b9-9bcd-479286328ec1/volumes" Dec 03 11:27:22 crc kubenswrapper[4756]: I1203 11:27:22.297123 4756 generic.go:334] "Generic (PLEG): container finished" podID="13546056-cc4c-4f52-81ab-79909380facb" containerID="f809dc7d46af7198bd2f4e73110a190b5301626cc5bfd8e26f754cca7ba48b0f" exitCode=0 Dec 03 11:27:22 crc kubenswrapper[4756]: I1203 11:27:22.297220 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" event={"ID":"13546056-cc4c-4f52-81ab-79909380facb","Type":"ContainerDied","Data":"f809dc7d46af7198bd2f4e73110a190b5301626cc5bfd8e26f754cca7ba48b0f"} Dec 03 11:27:22 crc kubenswrapper[4756]: I1203 11:27:22.607232 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 03 11:27:22 crc kubenswrapper[4756]: I1203 11:27:22.607321 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:27:23 crc kubenswrapper[4756]: I1203 11:27:23.742996 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:27:23 crc kubenswrapper[4756]: I1203 11:27:23.912919 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-inventory\") pod \"13546056-cc4c-4f52-81ab-79909380facb\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " Dec 03 11:27:23 crc kubenswrapper[4756]: I1203 11:27:23.913126 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfnfq\" (UniqueName: \"kubernetes.io/projected/13546056-cc4c-4f52-81ab-79909380facb-kube-api-access-pfnfq\") pod \"13546056-cc4c-4f52-81ab-79909380facb\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " Dec 03 11:27:23 crc kubenswrapper[4756]: I1203 11:27:23.913227 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-ssh-key\") pod \"13546056-cc4c-4f52-81ab-79909380facb\" (UID: \"13546056-cc4c-4f52-81ab-79909380facb\") " Dec 03 11:27:23 crc kubenswrapper[4756]: I1203 11:27:23.924482 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13546056-cc4c-4f52-81ab-79909380facb-kube-api-access-pfnfq" (OuterVolumeSpecName: "kube-api-access-pfnfq") pod 
"13546056-cc4c-4f52-81ab-79909380facb" (UID: "13546056-cc4c-4f52-81ab-79909380facb"). InnerVolumeSpecName "kube-api-access-pfnfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:27:23 crc kubenswrapper[4756]: I1203 11:27:23.947645 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-inventory" (OuterVolumeSpecName: "inventory") pod "13546056-cc4c-4f52-81ab-79909380facb" (UID: "13546056-cc4c-4f52-81ab-79909380facb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:27:23 crc kubenswrapper[4756]: I1203 11:27:23.947840 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "13546056-cc4c-4f52-81ab-79909380facb" (UID: "13546056-cc4c-4f52-81ab-79909380facb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.016449 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.016497 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfnfq\" (UniqueName: \"kubernetes.io/projected/13546056-cc4c-4f52-81ab-79909380facb-kube-api-access-pfnfq\") on node \"crc\" DevicePath \"\"" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.016515 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13546056-cc4c-4f52-81ab-79909380facb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.320378 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" event={"ID":"13546056-cc4c-4f52-81ab-79909380facb","Type":"ContainerDied","Data":"b594314cf1a8e1c37ffa535f58abb6f93f56291947273df5c865921d3f79b0df"} Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.320420 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-662qm" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.320432 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b594314cf1a8e1c37ffa535f58abb6f93f56291947273df5c865921d3f79b0df" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.418189 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk"] Dec 03 11:27:24 crc kubenswrapper[4756]: E1203 11:27:24.418727 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13546056-cc4c-4f52-81ab-79909380facb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.418754 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="13546056-cc4c-4f52-81ab-79909380facb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.419056 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="13546056-cc4c-4f52-81ab-79909380facb" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.420091 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.429877 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.429967 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.430189 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.430439 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.431322 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.431540 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.431638 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c69dw\" (UniqueName: 
\"kubernetes.io/projected/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-kube-api-access-c69dw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.446841 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk"] Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.533938 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.534025 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c69dw\" (UniqueName: \"kubernetes.io/projected/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-kube-api-access-c69dw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.534124 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.540252 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.547554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.563780 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c69dw\" (UniqueName: \"kubernetes.io/projected/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-kube-api-access-c69dw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:24 crc kubenswrapper[4756]: I1203 11:27:24.743529 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:25 crc kubenswrapper[4756]: I1203 11:27:25.294796 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk"] Dec 03 11:27:25 crc kubenswrapper[4756]: I1203 11:27:25.300991 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:27:25 crc kubenswrapper[4756]: I1203 11:27:25.331314 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" event={"ID":"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11","Type":"ContainerStarted","Data":"a5f45c336b252181ef8797c09f06d4893bd73a40d8280012ee540d970e9813a8"} Dec 03 11:27:26 crc kubenswrapper[4756]: I1203 11:27:26.343525 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" event={"ID":"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11","Type":"ContainerStarted","Data":"96c47b36a9567fa32b3f0f7dedbbc218914f3c801a5cc5b89685154526bd1cbc"} Dec 03 11:27:26 crc kubenswrapper[4756]: I1203 11:27:26.365124 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" podStartSLOduration=1.8811186119999999 podStartE2EDuration="2.365091475s" podCreationTimestamp="2025-12-03 11:27:24 +0000 UTC" firstStartedPulling="2025-12-03 11:27:25.300638404 +0000 UTC m=+2056.330639648" lastFinishedPulling="2025-12-03 11:27:25.784611267 +0000 UTC m=+2056.814612511" observedRunningTime="2025-12-03 11:27:26.362726691 +0000 UTC m=+2057.392727935" watchObservedRunningTime="2025-12-03 11:27:26.365091475 +0000 UTC m=+2057.395092719" Dec 03 11:27:31 crc kubenswrapper[4756]: I1203 11:27:31.397395 4756 generic.go:334] "Generic (PLEG): container finished" podID="cc23e11e-fc64-4cce-8ab5-4e63f64ccb11" 
containerID="96c47b36a9567fa32b3f0f7dedbbc218914f3c801a5cc5b89685154526bd1cbc" exitCode=0 Dec 03 11:27:31 crc kubenswrapper[4756]: I1203 11:27:31.397509 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" event={"ID":"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11","Type":"ContainerDied","Data":"96c47b36a9567fa32b3f0f7dedbbc218914f3c801a5cc5b89685154526bd1cbc"} Dec 03 11:27:32 crc kubenswrapper[4756]: I1203 11:27:32.890383 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:32 crc kubenswrapper[4756]: I1203 11:27:32.953894 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-inventory\") pod \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " Dec 03 11:27:32 crc kubenswrapper[4756]: I1203 11:27:32.954206 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-ssh-key\") pod \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " Dec 03 11:27:32 crc kubenswrapper[4756]: I1203 11:27:32.954368 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c69dw\" (UniqueName: \"kubernetes.io/projected/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-kube-api-access-c69dw\") pod \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\" (UID: \"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11\") " Dec 03 11:27:32 crc kubenswrapper[4756]: I1203 11:27:32.972850 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-kube-api-access-c69dw" (OuterVolumeSpecName: "kube-api-access-c69dw") pod 
"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11" (UID: "cc23e11e-fc64-4cce-8ab5-4e63f64ccb11"). InnerVolumeSpecName "kube-api-access-c69dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:27:32 crc kubenswrapper[4756]: I1203 11:27:32.992977 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-inventory" (OuterVolumeSpecName: "inventory") pod "cc23e11e-fc64-4cce-8ab5-4e63f64ccb11" (UID: "cc23e11e-fc64-4cce-8ab5-4e63f64ccb11"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:27:32 crc kubenswrapper[4756]: I1203 11:27:32.993261 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cc23e11e-fc64-4cce-8ab5-4e63f64ccb11" (UID: "cc23e11e-fc64-4cce-8ab5-4e63f64ccb11"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.056934 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.056984 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.056994 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c69dw\" (UniqueName: \"kubernetes.io/projected/cc23e11e-fc64-4cce-8ab5-4e63f64ccb11-kube-api-access-c69dw\") on node \"crc\" DevicePath \"\"" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.420766 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" event={"ID":"cc23e11e-fc64-4cce-8ab5-4e63f64ccb11","Type":"ContainerDied","Data":"a5f45c336b252181ef8797c09f06d4893bd73a40d8280012ee540d970e9813a8"} Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.420826 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f45c336b252181ef8797c09f06d4893bd73a40d8280012ee540d970e9813a8" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.420916 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.511589 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf"] Dec 03 11:27:33 crc kubenswrapper[4756]: E1203 11:27:33.512106 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc23e11e-fc64-4cce-8ab5-4e63f64ccb11" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.512125 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc23e11e-fc64-4cce-8ab5-4e63f64ccb11" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.512423 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc23e11e-fc64-4cce-8ab5-4e63f64ccb11" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.513324 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.516898 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.517225 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.517356 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.517402 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.537728 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf"] Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.568928 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85tx\" (UniqueName: \"kubernetes.io/projected/d75f87e1-8371-4657-853a-3ad9c89bbc74-kube-api-access-q85tx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.569205 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.569276 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.671303 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85tx\" (UniqueName: \"kubernetes.io/projected/d75f87e1-8371-4657-853a-3ad9c89bbc74-kube-api-access-q85tx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.671662 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.671772 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.679762 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: 
\"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.685500 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.692070 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85tx\" (UniqueName: \"kubernetes.io/projected/d75f87e1-8371-4657-853a-3ad9c89bbc74-kube-api-access-q85tx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rfcxf\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:33 crc kubenswrapper[4756]: I1203 11:27:33.836018 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:27:34 crc kubenswrapper[4756]: I1203 11:27:34.439209 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf"] Dec 03 11:27:35 crc kubenswrapper[4756]: I1203 11:27:35.444285 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" event={"ID":"d75f87e1-8371-4657-853a-3ad9c89bbc74","Type":"ContainerStarted","Data":"4f706b57b020506c17ea9b441333838c5d72238647ff6d93baa55043d74b4a33"} Dec 03 11:27:37 crc kubenswrapper[4756]: I1203 11:27:37.466016 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" event={"ID":"d75f87e1-8371-4657-853a-3ad9c89bbc74","Type":"ContainerStarted","Data":"9d94ee8f44c28338f1bebfbeafffb41536c725d69e785c10c92efeb887917b9d"} Dec 03 11:27:37 crc kubenswrapper[4756]: I1203 11:27:37.492646 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" podStartSLOduration=2.9581658170000003 podStartE2EDuration="4.492617622s" podCreationTimestamp="2025-12-03 11:27:33 +0000 UTC" firstStartedPulling="2025-12-03 11:27:34.454828697 +0000 UTC m=+2065.484829941" lastFinishedPulling="2025-12-03 11:27:35.989280502 +0000 UTC m=+2067.019281746" observedRunningTime="2025-12-03 11:27:37.485151458 +0000 UTC m=+2068.515152712" watchObservedRunningTime="2025-12-03 11:27:37.492617622 +0000 UTC m=+2068.522618866" Dec 03 11:27:49 crc kubenswrapper[4756]: I1203 11:27:49.878029 4756 scope.go:117] "RemoveContainer" containerID="5acb5aea66d13bf3cba991c192e7cf1d5c437bd45fed9f1127abef1914bade44" Dec 03 11:27:49 crc kubenswrapper[4756]: I1203 11:27:49.957161 4756 scope.go:117] "RemoveContainer" containerID="392b2c1a8792bc2b3afd4026e934079ec84f38df9f74894e9bdec53e8c2734ab" Dec 03 11:27:52 crc 
kubenswrapper[4756]: I1203 11:27:52.607920 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:27:52 crc kubenswrapper[4756]: I1203 11:27:52.608533 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:27:52 crc kubenswrapper[4756]: I1203 11:27:52.608592 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:27:52 crc kubenswrapper[4756]: I1203 11:27:52.609458 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bee67564a166c569d665ca8e6192a5b98d0e4116e06012f6ecd98a3f28ada620"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:27:52 crc kubenswrapper[4756]: I1203 11:27:52.609522 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://bee67564a166c569d665ca8e6192a5b98d0e4116e06012f6ecd98a3f28ada620" gracePeriod=600 Dec 03 11:27:53 crc kubenswrapper[4756]: I1203 11:27:53.658775 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" 
containerID="bee67564a166c569d665ca8e6192a5b98d0e4116e06012f6ecd98a3f28ada620" exitCode=0 Dec 03 11:27:53 crc kubenswrapper[4756]: I1203 11:27:53.658867 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"bee67564a166c569d665ca8e6192a5b98d0e4116e06012f6ecd98a3f28ada620"} Dec 03 11:27:53 crc kubenswrapper[4756]: I1203 11:27:53.659174 4756 scope.go:117] "RemoveContainer" containerID="cf14bdb2ae93bae16a0b7747d8e5b148afa3e540f685390a635dd55fa0f3a367" Dec 03 11:27:54 crc kubenswrapper[4756]: I1203 11:27:54.676718 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df"} Dec 03 11:27:56 crc kubenswrapper[4756]: I1203 11:27:56.044430 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xt2f8"] Dec 03 11:27:56 crc kubenswrapper[4756]: I1203 11:27:56.054540 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xt2f8"] Dec 03 11:27:57 crc kubenswrapper[4756]: I1203 11:27:57.247553 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79717969-f844-4ec0-935f-2e2886597684" path="/var/lib/kubelet/pods/79717969-f844-4ec0-935f-2e2886597684/volumes" Dec 03 11:28:14 crc kubenswrapper[4756]: I1203 11:28:14.861636 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6cbb"] Dec 03 11:28:14 crc kubenswrapper[4756]: I1203 11:28:14.865616 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:14 crc kubenswrapper[4756]: I1203 11:28:14.915414 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6cbb"] Dec 03 11:28:14 crc kubenswrapper[4756]: I1203 11:28:14.942312 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-catalog-content\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:14 crc kubenswrapper[4756]: I1203 11:28:14.942402 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxpp\" (UniqueName: \"kubernetes.io/projected/64473e02-222c-4be0-b9bb-fc1cf74afd4f-kube-api-access-gxxpp\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:14 crc kubenswrapper[4756]: I1203 11:28:14.942539 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-utilities\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.044467 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-utilities\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.045164 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-catalog-content\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.045211 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxpp\" (UniqueName: \"kubernetes.io/projected/64473e02-222c-4be0-b9bb-fc1cf74afd4f-kube-api-access-gxxpp\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.045208 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-utilities\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.045798 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-catalog-content\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.076388 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxpp\" (UniqueName: \"kubernetes.io/projected/64473e02-222c-4be0-b9bb-fc1cf74afd4f-kube-api-access-gxxpp\") pod \"community-operators-c6cbb\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.274559 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.871579 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6cbb"] Dec 03 11:28:15 crc kubenswrapper[4756]: I1203 11:28:15.898946 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6cbb" event={"ID":"64473e02-222c-4be0-b9bb-fc1cf74afd4f","Type":"ContainerStarted","Data":"09e3905375111787db1756087562381c0b727cb328f9957a7f760fc1a2ffceef"} Dec 03 11:28:16 crc kubenswrapper[4756]: I1203 11:28:16.910890 4756 generic.go:334] "Generic (PLEG): container finished" podID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerID="bb7ec4b37978f3bee90d302d7fac1793c677ec2fabf18684648a285e9daae794" exitCode=0 Dec 03 11:28:16 crc kubenswrapper[4756]: I1203 11:28:16.911000 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6cbb" event={"ID":"64473e02-222c-4be0-b9bb-fc1cf74afd4f","Type":"ContainerDied","Data":"bb7ec4b37978f3bee90d302d7fac1793c677ec2fabf18684648a285e9daae794"} Dec 03 11:28:17 crc kubenswrapper[4756]: I1203 11:28:17.923066 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6cbb" event={"ID":"64473e02-222c-4be0-b9bb-fc1cf74afd4f","Type":"ContainerStarted","Data":"bf5761e031cb79bdf0c5f61a690b2ba451c2d73436626c5c3ffbc75b1ff29f76"} Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.234578 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8s47s"] Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.237560 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.257541 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s47s"] Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.321665 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-catalog-content\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.321740 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nctsv\" (UniqueName: \"kubernetes.io/projected/59867473-846d-4ab0-8cda-b3b40a964ae2-kube-api-access-nctsv\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.321775 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-utilities\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.424239 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-catalog-content\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.424350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-nctsv\" (UniqueName: \"kubernetes.io/projected/59867473-846d-4ab0-8cda-b3b40a964ae2-kube-api-access-nctsv\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.424388 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-utilities\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.425001 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-catalog-content\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.425158 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-utilities\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.451516 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nctsv\" (UniqueName: \"kubernetes.io/projected/59867473-846d-4ab0-8cda-b3b40a964ae2-kube-api-access-nctsv\") pod \"redhat-operators-8s47s\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.559186 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.968878 4756 generic.go:334] "Generic (PLEG): container finished" podID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerID="bf5761e031cb79bdf0c5f61a690b2ba451c2d73436626c5c3ffbc75b1ff29f76" exitCode=0 Dec 03 11:28:18 crc kubenswrapper[4756]: I1203 11:28:18.969446 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6cbb" event={"ID":"64473e02-222c-4be0-b9bb-fc1cf74afd4f","Type":"ContainerDied","Data":"bf5761e031cb79bdf0c5f61a690b2ba451c2d73436626c5c3ffbc75b1ff29f76"} Dec 03 11:28:19 crc kubenswrapper[4756]: I1203 11:28:19.259149 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s47s"] Dec 03 11:28:19 crc kubenswrapper[4756]: I1203 11:28:19.982000 4756 generic.go:334] "Generic (PLEG): container finished" podID="d75f87e1-8371-4657-853a-3ad9c89bbc74" containerID="9d94ee8f44c28338f1bebfbeafffb41536c725d69e785c10c92efeb887917b9d" exitCode=0 Dec 03 11:28:19 crc kubenswrapper[4756]: I1203 11:28:19.982097 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" event={"ID":"d75f87e1-8371-4657-853a-3ad9c89bbc74","Type":"ContainerDied","Data":"9d94ee8f44c28338f1bebfbeafffb41536c725d69e785c10c92efeb887917b9d"} Dec 03 11:28:19 crc kubenswrapper[4756]: I1203 11:28:19.996765 4756 generic.go:334] "Generic (PLEG): container finished" podID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerID="7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554" exitCode=0 Dec 03 11:28:19 crc kubenswrapper[4756]: I1203 11:28:19.996893 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s47s" event={"ID":"59867473-846d-4ab0-8cda-b3b40a964ae2","Type":"ContainerDied","Data":"7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554"} Dec 03 
11:28:19 crc kubenswrapper[4756]: I1203 11:28:19.997017 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s47s" event={"ID":"59867473-846d-4ab0-8cda-b3b40a964ae2","Type":"ContainerStarted","Data":"bf3599b1ef9caba9893513ceb10c8f7216eb32118f037ede2600a8ba1dd8d45a"} Dec 03 11:28:20 crc kubenswrapper[4756]: I1203 11:28:20.001384 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6cbb" event={"ID":"64473e02-222c-4be0-b9bb-fc1cf74afd4f","Type":"ContainerStarted","Data":"f5aaaa911465d69d96d3ada65a49c7dab09eaa949916f011fdbfb9489f6e3805"} Dec 03 11:28:20 crc kubenswrapper[4756]: I1203 11:28:20.085332 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6cbb" podStartSLOduration=3.4763287370000002 podStartE2EDuration="6.085279658s" podCreationTimestamp="2025-12-03 11:28:14 +0000 UTC" firstStartedPulling="2025-12-03 11:28:16.915852736 +0000 UTC m=+2107.945853980" lastFinishedPulling="2025-12-03 11:28:19.524803657 +0000 UTC m=+2110.554804901" observedRunningTime="2025-12-03 11:28:20.070932298 +0000 UTC m=+2111.100933542" watchObservedRunningTime="2025-12-03 11:28:20.085279658 +0000 UTC m=+2111.115280902" Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.544978 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.618178 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-inventory\") pod \"d75f87e1-8371-4657-853a-3ad9c89bbc74\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.618265 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-ssh-key\") pod \"d75f87e1-8371-4657-853a-3ad9c89bbc74\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.618309 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85tx\" (UniqueName: \"kubernetes.io/projected/d75f87e1-8371-4657-853a-3ad9c89bbc74-kube-api-access-q85tx\") pod \"d75f87e1-8371-4657-853a-3ad9c89bbc74\" (UID: \"d75f87e1-8371-4657-853a-3ad9c89bbc74\") " Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.638877 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d75f87e1-8371-4657-853a-3ad9c89bbc74-kube-api-access-q85tx" (OuterVolumeSpecName: "kube-api-access-q85tx") pod "d75f87e1-8371-4657-853a-3ad9c89bbc74" (UID: "d75f87e1-8371-4657-853a-3ad9c89bbc74"). InnerVolumeSpecName "kube-api-access-q85tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.665096 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-inventory" (OuterVolumeSpecName: "inventory") pod "d75f87e1-8371-4657-853a-3ad9c89bbc74" (UID: "d75f87e1-8371-4657-853a-3ad9c89bbc74"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.722539 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.722593 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85tx\" (UniqueName: \"kubernetes.io/projected/d75f87e1-8371-4657-853a-3ad9c89bbc74-kube-api-access-q85tx\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.771157 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d75f87e1-8371-4657-853a-3ad9c89bbc74" (UID: "d75f87e1-8371-4657-853a-3ad9c89bbc74"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:28:21 crc kubenswrapper[4756]: I1203 11:28:21.825320 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d75f87e1-8371-4657-853a-3ad9c89bbc74-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.023396 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" event={"ID":"d75f87e1-8371-4657-853a-3ad9c89bbc74","Type":"ContainerDied","Data":"4f706b57b020506c17ea9b441333838c5d72238647ff6d93baa55043d74b4a33"} Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.023450 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f706b57b020506c17ea9b441333838c5d72238647ff6d93baa55043d74b4a33" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.023531 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rfcxf" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.074836 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s47s" event={"ID":"59867473-846d-4ab0-8cda-b3b40a964ae2","Type":"ContainerStarted","Data":"19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1"} Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.164016 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs"] Dec 03 11:28:22 crc kubenswrapper[4756]: E1203 11:28:22.164649 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d75f87e1-8371-4657-853a-3ad9c89bbc74" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.164677 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d75f87e1-8371-4657-853a-3ad9c89bbc74" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.165013 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d75f87e1-8371-4657-853a-3ad9c89bbc74" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.166327 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.174596 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.175216 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.175317 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.175448 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.183646 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs"] Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.234786 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.234875 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89c6\" (UniqueName: \"kubernetes.io/projected/db190d36-8d79-4e74-8ba0-898731235d0a-kube-api-access-j89c6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.234937 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.337140 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.337293 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j89c6\" (UniqueName: \"kubernetes.io/projected/db190d36-8d79-4e74-8ba0-898731235d0a-kube-api-access-j89c6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.338831 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.343291 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: 
\"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.344062 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.357330 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j89c6\" (UniqueName: \"kubernetes.io/projected/db190d36-8d79-4e74-8ba0-898731235d0a-kube-api-access-j89c6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:22 crc kubenswrapper[4756]: I1203 11:28:22.499293 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:28:23 crc kubenswrapper[4756]: I1203 11:28:23.060272 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs"] Dec 03 11:28:23 crc kubenswrapper[4756]: I1203 11:28:23.089148 4756 generic.go:334] "Generic (PLEG): container finished" podID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerID="19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1" exitCode=0 Dec 03 11:28:23 crc kubenswrapper[4756]: I1203 11:28:23.089277 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s47s" event={"ID":"59867473-846d-4ab0-8cda-b3b40a964ae2","Type":"ContainerDied","Data":"19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1"} Dec 03 11:28:23 crc kubenswrapper[4756]: I1203 11:28:23.093853 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" event={"ID":"db190d36-8d79-4e74-8ba0-898731235d0a","Type":"ContainerStarted","Data":"32e022abdd6347156d9a05d5c25cb0f1546ced1b3a1460c82ae902a4c6d59b3b"} Dec 03 11:28:25 crc kubenswrapper[4756]: I1203 11:28:25.274698 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:25 crc kubenswrapper[4756]: I1203 11:28:25.275703 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:25 crc kubenswrapper[4756]: I1203 11:28:25.338307 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:26 crc kubenswrapper[4756]: I1203 11:28:26.125170 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" 
event={"ID":"db190d36-8d79-4e74-8ba0-898731235d0a","Type":"ContainerStarted","Data":"1f0c1e0a403924e290a69447d194cb11175d9894d599dcc556809d4155a77898"} Dec 03 11:28:26 crc kubenswrapper[4756]: I1203 11:28:26.130119 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s47s" event={"ID":"59867473-846d-4ab0-8cda-b3b40a964ae2","Type":"ContainerStarted","Data":"6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833"} Dec 03 11:28:26 crc kubenswrapper[4756]: I1203 11:28:26.144981 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" podStartSLOduration=3.295726845 podStartE2EDuration="4.144945458s" podCreationTimestamp="2025-12-03 11:28:22 +0000 UTC" firstStartedPulling="2025-12-03 11:28:23.067637965 +0000 UTC m=+2114.097639209" lastFinishedPulling="2025-12-03 11:28:23.916856578 +0000 UTC m=+2114.946857822" observedRunningTime="2025-12-03 11:28:26.141772748 +0000 UTC m=+2117.171773992" watchObservedRunningTime="2025-12-03 11:28:26.144945458 +0000 UTC m=+2117.174946702" Dec 03 11:28:26 crc kubenswrapper[4756]: I1203 11:28:26.166751 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8s47s" podStartSLOduration=4.364131239 podStartE2EDuration="8.16672317s" podCreationTimestamp="2025-12-03 11:28:18 +0000 UTC" firstStartedPulling="2025-12-03 11:28:19.99890551 +0000 UTC m=+2111.028906754" lastFinishedPulling="2025-12-03 11:28:23.801497441 +0000 UTC m=+2114.831498685" observedRunningTime="2025-12-03 11:28:26.164624995 +0000 UTC m=+2117.194626239" watchObservedRunningTime="2025-12-03 11:28:26.16672317 +0000 UTC m=+2117.196724414" Dec 03 11:28:26 crc kubenswrapper[4756]: I1203 11:28:26.228782 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:28 crc kubenswrapper[4756]: I1203 
11:28:28.426400 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6cbb"] Dec 03 11:28:28 crc kubenswrapper[4756]: I1203 11:28:28.427187 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c6cbb" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="registry-server" containerID="cri-o://f5aaaa911465d69d96d3ada65a49c7dab09eaa949916f011fdbfb9489f6e3805" gracePeriod=2 Dec 03 11:28:28 crc kubenswrapper[4756]: I1203 11:28:28.560190 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:28 crc kubenswrapper[4756]: I1203 11:28:28.560275 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:29 crc kubenswrapper[4756]: I1203 11:28:29.180690 4756 generic.go:334] "Generic (PLEG): container finished" podID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerID="f5aaaa911465d69d96d3ada65a49c7dab09eaa949916f011fdbfb9489f6e3805" exitCode=0 Dec 03 11:28:29 crc kubenswrapper[4756]: I1203 11:28:29.181091 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6cbb" event={"ID":"64473e02-222c-4be0-b9bb-fc1cf74afd4f","Type":"ContainerDied","Data":"f5aaaa911465d69d96d3ada65a49c7dab09eaa949916f011fdbfb9489f6e3805"} Dec 03 11:28:29 crc kubenswrapper[4756]: I1203 11:28:29.627143 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8s47s" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="registry-server" probeResult="failure" output=< Dec 03 11:28:29 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 11:28:29 crc kubenswrapper[4756]: > Dec 03 11:28:29 crc kubenswrapper[4756]: I1203 11:28:29.867848 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.029304 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-catalog-content\") pod \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.029464 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-utilities\") pod \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.029938 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxxpp\" (UniqueName: \"kubernetes.io/projected/64473e02-222c-4be0-b9bb-fc1cf74afd4f-kube-api-access-gxxpp\") pod \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\" (UID: \"64473e02-222c-4be0-b9bb-fc1cf74afd4f\") " Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.032090 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-utilities" (OuterVolumeSpecName: "utilities") pod "64473e02-222c-4be0-b9bb-fc1cf74afd4f" (UID: "64473e02-222c-4be0-b9bb-fc1cf74afd4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.041485 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64473e02-222c-4be0-b9bb-fc1cf74afd4f-kube-api-access-gxxpp" (OuterVolumeSpecName: "kube-api-access-gxxpp") pod "64473e02-222c-4be0-b9bb-fc1cf74afd4f" (UID: "64473e02-222c-4be0-b9bb-fc1cf74afd4f"). InnerVolumeSpecName "kube-api-access-gxxpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.080816 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64473e02-222c-4be0-b9bb-fc1cf74afd4f" (UID: "64473e02-222c-4be0-b9bb-fc1cf74afd4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.133497 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxxpp\" (UniqueName: \"kubernetes.io/projected/64473e02-222c-4be0-b9bb-fc1cf74afd4f-kube-api-access-gxxpp\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.133545 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.133555 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64473e02-222c-4be0-b9bb-fc1cf74afd4f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.194591 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6cbb" event={"ID":"64473e02-222c-4be0-b9bb-fc1cf74afd4f","Type":"ContainerDied","Data":"09e3905375111787db1756087562381c0b727cb328f9957a7f760fc1a2ffceef"} Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.194660 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6cbb" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.194678 4756 scope.go:117] "RemoveContainer" containerID="f5aaaa911465d69d96d3ada65a49c7dab09eaa949916f011fdbfb9489f6e3805" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.236578 4756 scope.go:117] "RemoveContainer" containerID="bf5761e031cb79bdf0c5f61a690b2ba451c2d73436626c5c3ffbc75b1ff29f76" Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.237300 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6cbb"] Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.248265 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c6cbb"] Dec 03 11:28:30 crc kubenswrapper[4756]: I1203 11:28:30.269648 4756 scope.go:117] "RemoveContainer" containerID="bb7ec4b37978f3bee90d302d7fac1793c677ec2fabf18684648a285e9daae794" Dec 03 11:28:31 crc kubenswrapper[4756]: I1203 11:28:31.246774 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" path="/var/lib/kubelet/pods/64473e02-222c-4be0-b9bb-fc1cf74afd4f/volumes" Dec 03 11:28:38 crc kubenswrapper[4756]: I1203 11:28:38.619739 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:38 crc kubenswrapper[4756]: I1203 11:28:38.679271 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:38 crc kubenswrapper[4756]: I1203 11:28:38.863237 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s47s"] Dec 03 11:28:40 crc kubenswrapper[4756]: I1203 11:28:40.307055 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8s47s" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" 
containerName="registry-server" containerID="cri-o://6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833" gracePeriod=2 Dec 03 11:28:40 crc kubenswrapper[4756]: I1203 11:28:40.936659 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.035083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-utilities\") pod \"59867473-846d-4ab0-8cda-b3b40a964ae2\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.035168 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-catalog-content\") pod \"59867473-846d-4ab0-8cda-b3b40a964ae2\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.035213 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nctsv\" (UniqueName: \"kubernetes.io/projected/59867473-846d-4ab0-8cda-b3b40a964ae2-kube-api-access-nctsv\") pod \"59867473-846d-4ab0-8cda-b3b40a964ae2\" (UID: \"59867473-846d-4ab0-8cda-b3b40a964ae2\") " Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.037285 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-utilities" (OuterVolumeSpecName: "utilities") pod "59867473-846d-4ab0-8cda-b3b40a964ae2" (UID: "59867473-846d-4ab0-8cda-b3b40a964ae2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.057087 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59867473-846d-4ab0-8cda-b3b40a964ae2-kube-api-access-nctsv" (OuterVolumeSpecName: "kube-api-access-nctsv") pod "59867473-846d-4ab0-8cda-b3b40a964ae2" (UID: "59867473-846d-4ab0-8cda-b3b40a964ae2"). InnerVolumeSpecName "kube-api-access-nctsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.138747 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.138802 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nctsv\" (UniqueName: \"kubernetes.io/projected/59867473-846d-4ab0-8cda-b3b40a964ae2-kube-api-access-nctsv\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.159332 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59867473-846d-4ab0-8cda-b3b40a964ae2" (UID: "59867473-846d-4ab0-8cda-b3b40a964ae2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.240541 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59867473-846d-4ab0-8cda-b3b40a964ae2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.320231 4756 generic.go:334] "Generic (PLEG): container finished" podID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerID="6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833" exitCode=0 Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.320295 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s47s" event={"ID":"59867473-846d-4ab0-8cda-b3b40a964ae2","Type":"ContainerDied","Data":"6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833"} Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.320308 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s47s" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.320346 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s47s" event={"ID":"59867473-846d-4ab0-8cda-b3b40a964ae2","Type":"ContainerDied","Data":"bf3599b1ef9caba9893513ceb10c8f7216eb32118f037ede2600a8ba1dd8d45a"} Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.320376 4756 scope.go:117] "RemoveContainer" containerID="6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.348325 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s47s"] Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.378581 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8s47s"] Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.388863 4756 scope.go:117] "RemoveContainer" containerID="19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.467310 4756 scope.go:117] "RemoveContainer" containerID="7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.492865 4756 scope.go:117] "RemoveContainer" containerID="6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833" Dec 03 11:28:41 crc kubenswrapper[4756]: E1203 11:28:41.493681 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833\": container with ID starting with 6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833 not found: ID does not exist" containerID="6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.493758 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833"} err="failed to get container status \"6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833\": rpc error: code = NotFound desc = could not find container \"6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833\": container with ID starting with 6ef3e0aadf9a2e62d9540d9098f269dcdc1133343ad6dd6fffed930dcfa2c833 not found: ID does not exist" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.493800 4756 scope.go:117] "RemoveContainer" containerID="19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1" Dec 03 11:28:41 crc kubenswrapper[4756]: E1203 11:28:41.494740 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1\": container with ID starting with 19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1 not found: ID does not exist" containerID="19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.494796 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1"} err="failed to get container status \"19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1\": rpc error: code = NotFound desc = could not find container \"19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1\": container with ID starting with 19ac68239807acb05364d78ec5042a16cc1d8e9994ee8f8320190d59320e50c1 not found: ID does not exist" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.494824 4756 scope.go:117] "RemoveContainer" containerID="7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554" Dec 03 11:28:41 crc kubenswrapper[4756]: E1203 
11:28:41.495170 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554\": container with ID starting with 7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554 not found: ID does not exist" containerID="7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554" Dec 03 11:28:41 crc kubenswrapper[4756]: I1203 11:28:41.495229 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554"} err="failed to get container status \"7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554\": rpc error: code = NotFound desc = could not find container \"7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554\": container with ID starting with 7cee5477f952b7d49c5f778e6915ca0ed38ca801ba99ee912ce1fe3b6020e554 not found: ID does not exist" Dec 03 11:28:43 crc kubenswrapper[4756]: I1203 11:28:43.245744 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" path="/var/lib/kubelet/pods/59867473-846d-4ab0-8cda-b3b40a964ae2/volumes" Dec 03 11:28:50 crc kubenswrapper[4756]: I1203 11:28:50.078031 4756 scope.go:117] "RemoveContainer" containerID="6859e503d811c71c3b86193738a8438ff3b2257d38bea4d5194f8be348f8ea63" Dec 03 11:29:24 crc kubenswrapper[4756]: I1203 11:29:24.791525 4756 generic.go:334] "Generic (PLEG): container finished" podID="db190d36-8d79-4e74-8ba0-898731235d0a" containerID="1f0c1e0a403924e290a69447d194cb11175d9894d599dcc556809d4155a77898" exitCode=0 Dec 03 11:29:24 crc kubenswrapper[4756]: I1203 11:29:24.791770 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" 
event={"ID":"db190d36-8d79-4e74-8ba0-898731235d0a","Type":"ContainerDied","Data":"1f0c1e0a403924e290a69447d194cb11175d9894d599dcc556809d4155a77898"} Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.287784 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.392884 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j89c6\" (UniqueName: \"kubernetes.io/projected/db190d36-8d79-4e74-8ba0-898731235d0a-kube-api-access-j89c6\") pod \"db190d36-8d79-4e74-8ba0-898731235d0a\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.393043 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-ssh-key\") pod \"db190d36-8d79-4e74-8ba0-898731235d0a\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.393184 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-inventory\") pod \"db190d36-8d79-4e74-8ba0-898731235d0a\" (UID: \"db190d36-8d79-4e74-8ba0-898731235d0a\") " Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.402533 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db190d36-8d79-4e74-8ba0-898731235d0a-kube-api-access-j89c6" (OuterVolumeSpecName: "kube-api-access-j89c6") pod "db190d36-8d79-4e74-8ba0-898731235d0a" (UID: "db190d36-8d79-4e74-8ba0-898731235d0a"). InnerVolumeSpecName "kube-api-access-j89c6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.424695 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-inventory" (OuterVolumeSpecName: "inventory") pod "db190d36-8d79-4e74-8ba0-898731235d0a" (UID: "db190d36-8d79-4e74-8ba0-898731235d0a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.436700 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "db190d36-8d79-4e74-8ba0-898731235d0a" (UID: "db190d36-8d79-4e74-8ba0-898731235d0a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.496426 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j89c6\" (UniqueName: \"kubernetes.io/projected/db190d36-8d79-4e74-8ba0-898731235d0a-kube-api-access-j89c6\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.496473 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.496487 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db190d36-8d79-4e74-8ba0-898731235d0a-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.816761 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.816715 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs" event={"ID":"db190d36-8d79-4e74-8ba0-898731235d0a","Type":"ContainerDied","Data":"32e022abdd6347156d9a05d5c25cb0f1546ced1b3a1460c82ae902a4c6d59b3b"} Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.816897 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e022abdd6347156d9a05d5c25cb0f1546ced1b3a1460c82ae902a4c6d59b3b" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.912338 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rmcgx"] Dec 03 11:29:26 crc kubenswrapper[4756]: E1203 11:29:26.912888 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="extract-utilities" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.912916 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="extract-utilities" Dec 03 11:29:26 crc kubenswrapper[4756]: E1203 11:29:26.912927 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db190d36-8d79-4e74-8ba0-898731235d0a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.912936 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="db190d36-8d79-4e74-8ba0-898731235d0a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:29:26 crc kubenswrapper[4756]: E1203 11:29:26.912978 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="registry-server" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.912987 4756 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="registry-server" Dec 03 11:29:26 crc kubenswrapper[4756]: E1203 11:29:26.912999 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="extract-content" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.913005 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="extract-content" Dec 03 11:29:26 crc kubenswrapper[4756]: E1203 11:29:26.913012 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="registry-server" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.913019 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="registry-server" Dec 03 11:29:26 crc kubenswrapper[4756]: E1203 11:29:26.913033 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="extract-content" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.913038 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="extract-content" Dec 03 11:29:26 crc kubenswrapper[4756]: E1203 11:29:26.913071 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="extract-utilities" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.913084 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="extract-utilities" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.913307 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="59867473-846d-4ab0-8cda-b3b40a964ae2" containerName="registry-server" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.913348 4756 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="db190d36-8d79-4e74-8ba0-898731235d0a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.913370 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="64473e02-222c-4be0-b9bb-fc1cf74afd4f" containerName="registry-server" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.914211 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.917842 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.918106 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.918432 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.919147 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:29:26 crc kubenswrapper[4756]: I1203 11:29:26.929333 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rmcgx"] Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.008137 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.008802 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-m8pk9\" (UniqueName: \"kubernetes.io/projected/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-kube-api-access-m8pk9\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.008886 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.111891 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.111991 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8pk9\" (UniqueName: \"kubernetes.io/projected/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-kube-api-access-m8pk9\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.112037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc 
kubenswrapper[4756]: I1203 11:29:27.116773 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.119359 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.134378 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8pk9\" (UniqueName: \"kubernetes.io/projected/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-kube-api-access-m8pk9\") pod \"ssh-known-hosts-edpm-deployment-rmcgx\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.237373 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:27 crc kubenswrapper[4756]: I1203 11:29:27.859653 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rmcgx"] Dec 03 11:29:28 crc kubenswrapper[4756]: I1203 11:29:28.844270 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" event={"ID":"5b63b8f2-0259-45ce-b2c3-18b043ee2fab","Type":"ContainerStarted","Data":"3e2fc53a29cdac66ded5fdc8123dc3066252455b0fa06d7a2998fa6cbda1d17a"} Dec 03 11:29:28 crc kubenswrapper[4756]: I1203 11:29:28.845435 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" event={"ID":"5b63b8f2-0259-45ce-b2c3-18b043ee2fab","Type":"ContainerStarted","Data":"c3177e51201c1eb5117f74b6a10c2d5028d6dbab89f1c9b05f67e0768b62f268"} Dec 03 11:29:28 crc kubenswrapper[4756]: I1203 11:29:28.871606 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" podStartSLOduration=2.415995689 podStartE2EDuration="2.871577169s" podCreationTimestamp="2025-12-03 11:29:26 +0000 UTC" firstStartedPulling="2025-12-03 11:29:27.86932396 +0000 UTC m=+2178.899325204" lastFinishedPulling="2025-12-03 11:29:28.32490544 +0000 UTC m=+2179.354906684" observedRunningTime="2025-12-03 11:29:28.862847195 +0000 UTC m=+2179.892848439" watchObservedRunningTime="2025-12-03 11:29:28.871577169 +0000 UTC m=+2179.901578413" Dec 03 11:29:35 crc kubenswrapper[4756]: I1203 11:29:35.933398 4756 generic.go:334] "Generic (PLEG): container finished" podID="5b63b8f2-0259-45ce-b2c3-18b043ee2fab" containerID="3e2fc53a29cdac66ded5fdc8123dc3066252455b0fa06d7a2998fa6cbda1d17a" exitCode=0 Dec 03 11:29:35 crc kubenswrapper[4756]: I1203 11:29:35.933532 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" 
event={"ID":"5b63b8f2-0259-45ce-b2c3-18b043ee2fab","Type":"ContainerDied","Data":"3e2fc53a29cdac66ded5fdc8123dc3066252455b0fa06d7a2998fa6cbda1d17a"} Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.436833 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.587865 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-ssh-key-openstack-edpm-ipam\") pod \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.588009 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8pk9\" (UniqueName: \"kubernetes.io/projected/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-kube-api-access-m8pk9\") pod \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.588164 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-inventory-0\") pod \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\" (UID: \"5b63b8f2-0259-45ce-b2c3-18b043ee2fab\") " Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.603460 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-kube-api-access-m8pk9" (OuterVolumeSpecName: "kube-api-access-m8pk9") pod "5b63b8f2-0259-45ce-b2c3-18b043ee2fab" (UID: "5b63b8f2-0259-45ce-b2c3-18b043ee2fab"). InnerVolumeSpecName "kube-api-access-m8pk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.621703 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5b63b8f2-0259-45ce-b2c3-18b043ee2fab" (UID: "5b63b8f2-0259-45ce-b2c3-18b043ee2fab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.624619 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "5b63b8f2-0259-45ce-b2c3-18b043ee2fab" (UID: "5b63b8f2-0259-45ce-b2c3-18b043ee2fab"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.691750 4756 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.691811 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.691830 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8pk9\" (UniqueName: \"kubernetes.io/projected/5b63b8f2-0259-45ce-b2c3-18b043ee2fab-kube-api-access-m8pk9\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.959133 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" 
event={"ID":"5b63b8f2-0259-45ce-b2c3-18b043ee2fab","Type":"ContainerDied","Data":"c3177e51201c1eb5117f74b6a10c2d5028d6dbab89f1c9b05f67e0768b62f268"} Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.959184 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3177e51201c1eb5117f74b6a10c2d5028d6dbab89f1c9b05f67e0768b62f268" Dec 03 11:29:37 crc kubenswrapper[4756]: I1203 11:29:37.959208 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rmcgx" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.049522 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj"] Dec 03 11:29:38 crc kubenswrapper[4756]: E1203 11:29:38.050057 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b63b8f2-0259-45ce-b2c3-18b043ee2fab" containerName="ssh-known-hosts-edpm-deployment" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.050082 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b63b8f2-0259-45ce-b2c3-18b043ee2fab" containerName="ssh-known-hosts-edpm-deployment" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.050357 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b63b8f2-0259-45ce-b2c3-18b043ee2fab" containerName="ssh-known-hosts-edpm-deployment" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.051122 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.055393 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.057434 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.057739 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.058008 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.064027 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj"] Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.208714 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.208857 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpxr\" (UniqueName: \"kubernetes.io/projected/6252d786-f230-4eed-bc78-b688e07c12e7-kube-api-access-tjpxr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.209242 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.311374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.311478 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.311526 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpxr\" (UniqueName: \"kubernetes.io/projected/6252d786-f230-4eed-bc78-b688e07c12e7-kube-api-access-tjpxr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.317498 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.317776 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.330640 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpxr\" (UniqueName: \"kubernetes.io/projected/6252d786-f230-4eed-bc78-b688e07c12e7-kube-api-access-tjpxr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wrchj\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.401675 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:38 crc kubenswrapper[4756]: I1203 11:29:38.970692 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj"] Dec 03 11:29:39 crc kubenswrapper[4756]: I1203 11:29:39.982630 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" event={"ID":"6252d786-f230-4eed-bc78-b688e07c12e7","Type":"ContainerStarted","Data":"b02c06ea3fb5c7346e6e63faa83ded99bba5a08bff9968f5493a84c09c136c5a"} Dec 03 11:29:39 crc kubenswrapper[4756]: I1203 11:29:39.983020 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" event={"ID":"6252d786-f230-4eed-bc78-b688e07c12e7","Type":"ContainerStarted","Data":"c1c74463f1e4873b68f25845f6498274b89930bacc68402950901f70010b30a5"} Dec 03 11:29:40 crc kubenswrapper[4756]: I1203 11:29:40.010565 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" podStartSLOduration=1.414963172 podStartE2EDuration="2.010541987s" podCreationTimestamp="2025-12-03 11:29:38 +0000 UTC" firstStartedPulling="2025-12-03 11:29:38.982699125 +0000 UTC m=+2190.012700369" lastFinishedPulling="2025-12-03 11:29:39.57827794 +0000 UTC m=+2190.608279184" observedRunningTime="2025-12-03 11:29:40.001098931 +0000 UTC m=+2191.031100175" watchObservedRunningTime="2025-12-03 11:29:40.010541987 +0000 UTC m=+2191.040543231" Dec 03 11:29:49 crc kubenswrapper[4756]: I1203 11:29:49.075601 4756 generic.go:334] "Generic (PLEG): container finished" podID="6252d786-f230-4eed-bc78-b688e07c12e7" containerID="b02c06ea3fb5c7346e6e63faa83ded99bba5a08bff9968f5493a84c09c136c5a" exitCode=0 Dec 03 11:29:49 crc kubenswrapper[4756]: I1203 11:29:49.075677 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" event={"ID":"6252d786-f230-4eed-bc78-b688e07c12e7","Type":"ContainerDied","Data":"b02c06ea3fb5c7346e6e63faa83ded99bba5a08bff9968f5493a84c09c136c5a"} Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.526812 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.652368 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-ssh-key\") pod \"6252d786-f230-4eed-bc78-b688e07c12e7\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.652492 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-inventory\") pod \"6252d786-f230-4eed-bc78-b688e07c12e7\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.652603 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjpxr\" (UniqueName: \"kubernetes.io/projected/6252d786-f230-4eed-bc78-b688e07c12e7-kube-api-access-tjpxr\") pod \"6252d786-f230-4eed-bc78-b688e07c12e7\" (UID: \"6252d786-f230-4eed-bc78-b688e07c12e7\") " Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.661534 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6252d786-f230-4eed-bc78-b688e07c12e7-kube-api-access-tjpxr" (OuterVolumeSpecName: "kube-api-access-tjpxr") pod "6252d786-f230-4eed-bc78-b688e07c12e7" (UID: "6252d786-f230-4eed-bc78-b688e07c12e7"). InnerVolumeSpecName "kube-api-access-tjpxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.690087 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-inventory" (OuterVolumeSpecName: "inventory") pod "6252d786-f230-4eed-bc78-b688e07c12e7" (UID: "6252d786-f230-4eed-bc78-b688e07c12e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.692678 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6252d786-f230-4eed-bc78-b688e07c12e7" (UID: "6252d786-f230-4eed-bc78-b688e07c12e7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.755645 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.755716 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6252d786-f230-4eed-bc78-b688e07c12e7-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:50 crc kubenswrapper[4756]: I1203 11:29:50.755733 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjpxr\" (UniqueName: \"kubernetes.io/projected/6252d786-f230-4eed-bc78-b688e07c12e7-kube-api-access-tjpxr\") on node \"crc\" DevicePath \"\"" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.100571 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" 
event={"ID":"6252d786-f230-4eed-bc78-b688e07c12e7","Type":"ContainerDied","Data":"c1c74463f1e4873b68f25845f6498274b89930bacc68402950901f70010b30a5"} Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.101166 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c74463f1e4873b68f25845f6498274b89930bacc68402950901f70010b30a5" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.100653 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wrchj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.250511 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj"] Dec 03 11:29:51 crc kubenswrapper[4756]: E1203 11:29:51.255802 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6252d786-f230-4eed-bc78-b688e07c12e7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.263280 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6252d786-f230-4eed-bc78-b688e07c12e7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.264526 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6252d786-f230-4eed-bc78-b688e07c12e7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.265653 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj"] Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.265901 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.269623 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.269915 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.270383 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.273230 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.373675 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.373792 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.374029 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99bn\" (UniqueName: \"kubernetes.io/projected/8e9f7a37-9bb0-4f3e-bdfd-962164857651-kube-api-access-v99bn\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.476470 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99bn\" (UniqueName: \"kubernetes.io/projected/8e9f7a37-9bb0-4f3e-bdfd-962164857651-kube-api-access-v99bn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.476657 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.476707 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.482336 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.484973 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.494590 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99bn\" (UniqueName: \"kubernetes.io/projected/8e9f7a37-9bb0-4f3e-bdfd-962164857651-kube-api-access-v99bn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:51 crc kubenswrapper[4756]: I1203 11:29:51.589402 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:29:52 crc kubenswrapper[4756]: I1203 11:29:52.133768 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj"] Dec 03 11:29:53 crc kubenswrapper[4756]: I1203 11:29:53.119048 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" event={"ID":"8e9f7a37-9bb0-4f3e-bdfd-962164857651","Type":"ContainerStarted","Data":"dd12b78adc7987b75649469875f104bcc55c5e684adf3983fa0f5b7ea5a1b6c5"} Dec 03 11:29:54 crc kubenswrapper[4756]: I1203 11:29:54.129357 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" event={"ID":"8e9f7a37-9bb0-4f3e-bdfd-962164857651","Type":"ContainerStarted","Data":"a3937733a177e675299535f2555c54905011a52c4f30cdb7bda6421d7ea87217"} Dec 03 11:29:54 crc kubenswrapper[4756]: I1203 11:29:54.151527 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" podStartSLOduration=2.434893299 podStartE2EDuration="3.151503192s" podCreationTimestamp="2025-12-03 11:29:51 +0000 UTC" firstStartedPulling="2025-12-03 11:29:52.141171462 +0000 UTC m=+2203.171172706" lastFinishedPulling="2025-12-03 11:29:52.857781355 +0000 UTC m=+2203.887782599" observedRunningTime="2025-12-03 11:29:54.144657608 +0000 UTC m=+2205.174658872" watchObservedRunningTime="2025-12-03 11:29:54.151503192 +0000 UTC m=+2205.181504456" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.400686 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-46scl"] Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.403865 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.419442 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-46scl"] Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.514392 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-utilities\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.514518 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9q4k\" (UniqueName: \"kubernetes.io/projected/e860160e-696f-4155-8122-4c65d046322e-kube-api-access-p9q4k\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.514591 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-catalog-content\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.617343 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-utilities\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.617449 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9q4k\" (UniqueName: \"kubernetes.io/projected/e860160e-696f-4155-8122-4c65d046322e-kube-api-access-p9q4k\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.617526 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-catalog-content\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.618269 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-utilities\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.618334 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-catalog-content\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.643582 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9q4k\" (UniqueName: \"kubernetes.io/projected/e860160e-696f-4155-8122-4c65d046322e-kube-api-access-p9q4k\") pod \"redhat-marketplace-46scl\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:57 crc kubenswrapper[4756]: I1203 11:29:57.728584 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:29:58 crc kubenswrapper[4756]: I1203 11:29:58.237415 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-46scl"] Dec 03 11:29:59 crc kubenswrapper[4756]: I1203 11:29:59.180766 4756 generic.go:334] "Generic (PLEG): container finished" podID="e860160e-696f-4155-8122-4c65d046322e" containerID="97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53" exitCode=0 Dec 03 11:29:59 crc kubenswrapper[4756]: I1203 11:29:59.181186 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46scl" event={"ID":"e860160e-696f-4155-8122-4c65d046322e","Type":"ContainerDied","Data":"97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53"} Dec 03 11:29:59 crc kubenswrapper[4756]: I1203 11:29:59.181223 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46scl" event={"ID":"e860160e-696f-4155-8122-4c65d046322e","Type":"ContainerStarted","Data":"5b97726e96b16243afeaf259caa279f50bfa3fad0d705461f086b4c684e9a71f"} Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.147361 
4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw"] Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.149913 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.152637 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.154736 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.164157 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw"] Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.205758 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46scl" event={"ID":"e860160e-696f-4155-8122-4c65d046322e","Type":"ContainerStarted","Data":"9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562"} Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.280367 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbs6\" (UniqueName: \"kubernetes.io/projected/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-kube-api-access-cxbs6\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.280453 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-config-volume\") pod 
\"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.280517 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-secret-volume\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.383022 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbs6\" (UniqueName: \"kubernetes.io/projected/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-kube-api-access-cxbs6\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.383187 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-config-volume\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.383279 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-secret-volume\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.385302 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-config-volume\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.391820 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-secret-volume\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.405981 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbs6\" (UniqueName: \"kubernetes.io/projected/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-kube-api-access-cxbs6\") pod \"collect-profiles-29412690-hsksw\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.491580 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:00 crc kubenswrapper[4756]: I1203 11:30:00.974786 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw"] Dec 03 11:30:00 crc kubenswrapper[4756]: W1203 11:30:00.978531 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b2e2743_9acd_49cc_b3e8_691cf0d6dfcb.slice/crio-5dc5f1b0ae7a9dbef464e05b1fe8d21552f08da976e94b2901ef8e8b543a8951 WatchSource:0}: Error finding container 5dc5f1b0ae7a9dbef464e05b1fe8d21552f08da976e94b2901ef8e8b543a8951: Status 404 returned error can't find the container with id 5dc5f1b0ae7a9dbef464e05b1fe8d21552f08da976e94b2901ef8e8b543a8951 Dec 03 11:30:01 crc kubenswrapper[4756]: I1203 11:30:01.229164 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" event={"ID":"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb","Type":"ContainerStarted","Data":"8be9885a9aea0c9db62824610dd8adc409f66b1a592cdf2f455f1f88acd669cc"} Dec 03 11:30:01 crc kubenswrapper[4756]: I1203 11:30:01.229217 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" event={"ID":"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb","Type":"ContainerStarted","Data":"5dc5f1b0ae7a9dbef464e05b1fe8d21552f08da976e94b2901ef8e8b543a8951"} Dec 03 11:30:01 crc kubenswrapper[4756]: I1203 11:30:01.237152 4756 generic.go:334] "Generic (PLEG): container finished" podID="e860160e-696f-4155-8122-4c65d046322e" containerID="9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562" exitCode=0 Dec 03 11:30:01 crc kubenswrapper[4756]: I1203 11:30:01.254742 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46scl" 
event={"ID":"e860160e-696f-4155-8122-4c65d046322e","Type":"ContainerDied","Data":"9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562"} Dec 03 11:30:01 crc kubenswrapper[4756]: I1203 11:30:01.266989 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" podStartSLOduration=1.26693894 podStartE2EDuration="1.26693894s" podCreationTimestamp="2025-12-03 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 11:30:01.249733791 +0000 UTC m=+2212.279735045" watchObservedRunningTime="2025-12-03 11:30:01.26693894 +0000 UTC m=+2212.296940204" Dec 03 11:30:02 crc kubenswrapper[4756]: I1203 11:30:02.248360 4756 generic.go:334] "Generic (PLEG): container finished" podID="3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" containerID="8be9885a9aea0c9db62824610dd8adc409f66b1a592cdf2f455f1f88acd669cc" exitCode=0 Dec 03 11:30:02 crc kubenswrapper[4756]: I1203 11:30:02.248483 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" event={"ID":"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb","Type":"ContainerDied","Data":"8be9885a9aea0c9db62824610dd8adc409f66b1a592cdf2f455f1f88acd669cc"} Dec 03 11:30:02 crc kubenswrapper[4756]: I1203 11:30:02.252805 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46scl" event={"ID":"e860160e-696f-4155-8122-4c65d046322e","Type":"ContainerStarted","Data":"4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b"} Dec 03 11:30:02 crc kubenswrapper[4756]: I1203 11:30:02.287359 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-46scl" podStartSLOduration=2.77368721 podStartE2EDuration="5.287328768s" podCreationTimestamp="2025-12-03 11:29:57 +0000 UTC" 
firstStartedPulling="2025-12-03 11:29:59.185190789 +0000 UTC m=+2210.215192043" lastFinishedPulling="2025-12-03 11:30:01.698832357 +0000 UTC m=+2212.728833601" observedRunningTime="2025-12-03 11:30:02.285430969 +0000 UTC m=+2213.315432213" watchObservedRunningTime="2025-12-03 11:30:02.287328768 +0000 UTC m=+2213.317330012" Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.597837 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.761753 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxbs6\" (UniqueName: \"kubernetes.io/projected/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-kube-api-access-cxbs6\") pod \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.762639 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-secret-volume\") pod \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.762677 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-config-volume\") pod \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\" (UID: \"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb\") " Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.763485 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" (UID: "3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.770189 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-kube-api-access-cxbs6" (OuterVolumeSpecName: "kube-api-access-cxbs6") pod "3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" (UID: "3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb"). InnerVolumeSpecName "kube-api-access-cxbs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.773827 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" (UID: "3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.865764 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxbs6\" (UniqueName: \"kubernetes.io/projected/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-kube-api-access-cxbs6\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.865812 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:03 crc kubenswrapper[4756]: I1203 11:30:03.865822 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:04 crc kubenswrapper[4756]: I1203 11:30:04.274743 4756 generic.go:334] "Generic (PLEG): container finished" podID="8e9f7a37-9bb0-4f3e-bdfd-962164857651" 
containerID="a3937733a177e675299535f2555c54905011a52c4f30cdb7bda6421d7ea87217" exitCode=0 Dec 03 11:30:04 crc kubenswrapper[4756]: I1203 11:30:04.274840 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" event={"ID":"8e9f7a37-9bb0-4f3e-bdfd-962164857651","Type":"ContainerDied","Data":"a3937733a177e675299535f2555c54905011a52c4f30cdb7bda6421d7ea87217"} Dec 03 11:30:04 crc kubenswrapper[4756]: I1203 11:30:04.280267 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" event={"ID":"3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb","Type":"ContainerDied","Data":"5dc5f1b0ae7a9dbef464e05b1fe8d21552f08da976e94b2901ef8e8b543a8951"} Dec 03 11:30:04 crc kubenswrapper[4756]: I1203 11:30:04.280330 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc5f1b0ae7a9dbef464e05b1fe8d21552f08da976e94b2901ef8e8b543a8951" Dec 03 11:30:04 crc kubenswrapper[4756]: I1203 11:30:04.280413 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw" Dec 03 11:30:04 crc kubenswrapper[4756]: I1203 11:30:04.354873 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4"] Dec 03 11:30:04 crc kubenswrapper[4756]: I1203 11:30:04.364394 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412645-fnqc4"] Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.250407 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce33fc3-643b-4347-aa7f-31fcfe461b1d" path="/var/lib/kubelet/pods/6ce33fc3-643b-4347-aa7f-31fcfe461b1d/volumes" Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.713975 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.815982 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v99bn\" (UniqueName: \"kubernetes.io/projected/8e9f7a37-9bb0-4f3e-bdfd-962164857651-kube-api-access-v99bn\") pod \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.816815 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-ssh-key\") pod \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.817010 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-inventory\") pod \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\" (UID: \"8e9f7a37-9bb0-4f3e-bdfd-962164857651\") " Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.824325 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9f7a37-9bb0-4f3e-bdfd-962164857651-kube-api-access-v99bn" (OuterVolumeSpecName: "kube-api-access-v99bn") pod "8e9f7a37-9bb0-4f3e-bdfd-962164857651" (UID: "8e9f7a37-9bb0-4f3e-bdfd-962164857651"). InnerVolumeSpecName "kube-api-access-v99bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.856515 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8e9f7a37-9bb0-4f3e-bdfd-962164857651" (UID: "8e9f7a37-9bb0-4f3e-bdfd-962164857651"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.871662 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-inventory" (OuterVolumeSpecName: "inventory") pod "8e9f7a37-9bb0-4f3e-bdfd-962164857651" (UID: "8e9f7a37-9bb0-4f3e-bdfd-962164857651"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.920229 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v99bn\" (UniqueName: \"kubernetes.io/projected/8e9f7a37-9bb0-4f3e-bdfd-962164857651-kube-api-access-v99bn\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.920304 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:05 crc kubenswrapper[4756]: I1203 11:30:05.920319 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e9f7a37-9bb0-4f3e-bdfd-962164857651-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.304820 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" event={"ID":"8e9f7a37-9bb0-4f3e-bdfd-962164857651","Type":"ContainerDied","Data":"dd12b78adc7987b75649469875f104bcc55c5e684adf3983fa0f5b7ea5a1b6c5"} Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.304889 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd12b78adc7987b75649469875f104bcc55c5e684adf3983fa0f5b7ea5a1b6c5" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.304917 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.406383 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs"] Dec 03 11:30:06 crc kubenswrapper[4756]: E1203 11:30:06.406946 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9f7a37-9bb0-4f3e-bdfd-962164857651" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.406995 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9f7a37-9bb0-4f3e-bdfd-962164857651" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:30:06 crc kubenswrapper[4756]: E1203 11:30:06.407016 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" containerName="collect-profiles" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.407024 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" containerName="collect-profiles" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.407332 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" containerName="collect-profiles" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.407371 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9f7a37-9bb0-4f3e-bdfd-962164857651" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.408563 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.414346 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.414503 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.414517 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.414643 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.414715 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.414853 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.415155 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.418182 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.424974 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs"] Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.533863 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.533999 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534119 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534179 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534265 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534336 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534451 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534483 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534538 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tw7f\" (UniqueName: 
\"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-kube-api-access-6tw7f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534618 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534691 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534754 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534780 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.534837 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636576 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636632 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636667 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636695 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636739 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636773 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636811 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636829 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636848 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tw7f\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-kube-api-access-6tw7f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636874 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636898 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" 
(UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636935 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.636975 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.637085 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.644300 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc 
kubenswrapper[4756]: I1203 11:30:06.644440 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.644646 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.645043 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.645327 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.645568 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.646707 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.647277 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.647441 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.648295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.648498 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.650023 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.650136 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.663273 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tw7f\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-kube-api-access-6tw7f\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs\" 
(UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:06 crc kubenswrapper[4756]: I1203 11:30:06.738894 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:07 crc kubenswrapper[4756]: I1203 11:30:07.353012 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs"] Dec 03 11:30:07 crc kubenswrapper[4756]: I1203 11:30:07.729656 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:30:07 crc kubenswrapper[4756]: I1203 11:30:07.729776 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:30:07 crc kubenswrapper[4756]: I1203 11:30:07.785390 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:30:08 crc kubenswrapper[4756]: I1203 11:30:08.328041 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" event={"ID":"42d479c1-c36c-4481-a4e1-7e155283859d","Type":"ContainerStarted","Data":"930d21c63a36210bc502bb4b7ff033b7dc300953ee8f215a633cb37a4c2b457d"} Dec 03 11:30:08 crc kubenswrapper[4756]: I1203 11:30:08.328612 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" event={"ID":"42d479c1-c36c-4481-a4e1-7e155283859d","Type":"ContainerStarted","Data":"d59f3e5d89f7abcc0cf81b7a4badb7db9d800f16f91de25a4fd1bb3f925e34de"} Dec 03 11:30:08 crc kubenswrapper[4756]: I1203 11:30:08.357705 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" 
podStartSLOduration=1.705037959 podStartE2EDuration="2.357674584s" podCreationTimestamp="2025-12-03 11:30:06 +0000 UTC" firstStartedPulling="2025-12-03 11:30:07.360288498 +0000 UTC m=+2218.390289742" lastFinishedPulling="2025-12-03 11:30:08.012925083 +0000 UTC m=+2219.042926367" observedRunningTime="2025-12-03 11:30:08.352208642 +0000 UTC m=+2219.382209906" watchObservedRunningTime="2025-12-03 11:30:08.357674584 +0000 UTC m=+2219.387675828" Dec 03 11:30:08 crc kubenswrapper[4756]: I1203 11:30:08.385009 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:30:09 crc kubenswrapper[4756]: I1203 11:30:09.963086 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-46scl"] Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.345528 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-46scl" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="registry-server" containerID="cri-o://4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b" gracePeriod=2 Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.850051 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.964116 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-catalog-content\") pod \"e860160e-696f-4155-8122-4c65d046322e\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.964195 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-utilities\") pod \"e860160e-696f-4155-8122-4c65d046322e\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.964262 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9q4k\" (UniqueName: \"kubernetes.io/projected/e860160e-696f-4155-8122-4c65d046322e-kube-api-access-p9q4k\") pod \"e860160e-696f-4155-8122-4c65d046322e\" (UID: \"e860160e-696f-4155-8122-4c65d046322e\") " Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.965290 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-utilities" (OuterVolumeSpecName: "utilities") pod "e860160e-696f-4155-8122-4c65d046322e" (UID: "e860160e-696f-4155-8122-4c65d046322e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.973539 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e860160e-696f-4155-8122-4c65d046322e-kube-api-access-p9q4k" (OuterVolumeSpecName: "kube-api-access-p9q4k") pod "e860160e-696f-4155-8122-4c65d046322e" (UID: "e860160e-696f-4155-8122-4c65d046322e"). InnerVolumeSpecName "kube-api-access-p9q4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:10 crc kubenswrapper[4756]: I1203 11:30:10.986829 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e860160e-696f-4155-8122-4c65d046322e" (UID: "e860160e-696f-4155-8122-4c65d046322e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.067176 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.067233 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e860160e-696f-4155-8122-4c65d046322e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.067248 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9q4k\" (UniqueName: \"kubernetes.io/projected/e860160e-696f-4155-8122-4c65d046322e-kube-api-access-p9q4k\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.358070 4756 generic.go:334] "Generic (PLEG): container finished" podID="e860160e-696f-4155-8122-4c65d046322e" containerID="4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b" exitCode=0 Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.358134 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46scl" event={"ID":"e860160e-696f-4155-8122-4c65d046322e","Type":"ContainerDied","Data":"4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b"} Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.358179 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-46scl" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.358184 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-46scl" event={"ID":"e860160e-696f-4155-8122-4c65d046322e","Type":"ContainerDied","Data":"5b97726e96b16243afeaf259caa279f50bfa3fad0d705461f086b4c684e9a71f"} Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.358221 4756 scope.go:117] "RemoveContainer" containerID="4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.389463 4756 scope.go:117] "RemoveContainer" containerID="9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.396344 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-46scl"] Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.408809 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-46scl"] Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.429006 4756 scope.go:117] "RemoveContainer" containerID="97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.463204 4756 scope.go:117] "RemoveContainer" containerID="4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b" Dec 03 11:30:11 crc kubenswrapper[4756]: E1203 11:30:11.463720 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b\": container with ID starting with 4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b not found: ID does not exist" containerID="4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.463772 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b"} err="failed to get container status \"4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b\": rpc error: code = NotFound desc = could not find container \"4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b\": container with ID starting with 4fc2bb20a2734674319e900097695762b5b504430b461ba31ae9d4fc4da3f91b not found: ID does not exist" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.463803 4756 scope.go:117] "RemoveContainer" containerID="9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562" Dec 03 11:30:11 crc kubenswrapper[4756]: E1203 11:30:11.464192 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562\": container with ID starting with 9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562 not found: ID does not exist" containerID="9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.464257 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562"} err="failed to get container status \"9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562\": rpc error: code = NotFound desc = could not find container \"9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562\": container with ID starting with 9d24e22ea28bb66b53b42374453b421f11e85f6c74f4c7747aabbd7d82384562 not found: ID does not exist" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.464302 4756 scope.go:117] "RemoveContainer" containerID="97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53" Dec 03 11:30:11 crc kubenswrapper[4756]: E1203 
11:30:11.464674 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53\": container with ID starting with 97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53 not found: ID does not exist" containerID="97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53" Dec 03 11:30:11 crc kubenswrapper[4756]: I1203 11:30:11.464704 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53"} err="failed to get container status \"97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53\": rpc error: code = NotFound desc = could not find container \"97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53\": container with ID starting with 97f7b44ecd8ce41195dc248d4c7d7d2d5a600ec031e40ce252673cc224020e53 not found: ID does not exist" Dec 03 11:30:13 crc kubenswrapper[4756]: I1203 11:30:13.247353 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e860160e-696f-4155-8122-4c65d046322e" path="/var/lib/kubelet/pods/e860160e-696f-4155-8122-4c65d046322e/volumes" Dec 03 11:30:22 crc kubenswrapper[4756]: I1203 11:30:22.607816 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:30:22 crc kubenswrapper[4756]: I1203 11:30:22.608821 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 03 11:30:49 crc kubenswrapper[4756]: E1203 11:30:49.712261 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d479c1_c36c_4481_a4e1_7e155283859d.slice/crio-930d21c63a36210bc502bb4b7ff033b7dc300953ee8f215a633cb37a4c2b457d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d479c1_c36c_4481_a4e1_7e155283859d.slice/crio-conmon-930d21c63a36210bc502bb4b7ff033b7dc300953ee8f215a633cb37a4c2b457d.scope\": RecentStats: unable to find data in memory cache]" Dec 03 11:30:49 crc kubenswrapper[4756]: I1203 11:30:49.743876 4756 generic.go:334] "Generic (PLEG): container finished" podID="42d479c1-c36c-4481-a4e1-7e155283859d" containerID="930d21c63a36210bc502bb4b7ff033b7dc300953ee8f215a633cb37a4c2b457d" exitCode=0 Dec 03 11:30:49 crc kubenswrapper[4756]: I1203 11:30:49.744094 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" event={"ID":"42d479c1-c36c-4481-a4e1-7e155283859d","Type":"ContainerDied","Data":"930d21c63a36210bc502bb4b7ff033b7dc300953ee8f215a633cb37a4c2b457d"} Dec 03 11:30:50 crc kubenswrapper[4756]: I1203 11:30:50.280813 4756 scope.go:117] "RemoveContainer" containerID="ef9c857ca5a9d3dd2f7c6ca6fa9cd5d612d9bfd31a00a6e81842ba22662ebe8e" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.197942 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400355 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400409 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-libvirt-combined-ca-bundle\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400444 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400468 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-telemetry-combined-ca-bundle\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400509 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ssh-key\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: 
\"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400552 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-neutron-metadata-combined-ca-bundle\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400575 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400612 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-bootstrap-combined-ca-bundle\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400633 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tw7f\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-kube-api-access-6tw7f\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400657 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-nova-combined-ca-bundle\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 
crc kubenswrapper[4756]: I1203 11:30:51.400690 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-repo-setup-combined-ca-bundle\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400717 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400753 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ovn-combined-ca-bundle\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.400780 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-inventory\") pod \"42d479c1-c36c-4481-a4e1-7e155283859d\" (UID: \"42d479c1-c36c-4481-a4e1-7e155283859d\") " Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.409930 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-kube-api-access-6tw7f" (OuterVolumeSpecName: "kube-api-access-6tw7f") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "kube-api-access-6tw7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.410243 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.412542 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.412643 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.413145 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.413715 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.414054 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.414325 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.415703 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.416187 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.422254 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.423196 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.439406 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-inventory" (OuterVolumeSpecName: "inventory") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.441497 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "42d479c1-c36c-4481-a4e1-7e155283859d" (UID: "42d479c1-c36c-4481-a4e1-7e155283859d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502272 4756 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502318 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502331 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502345 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502354 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 
11:30:51.502363 4756 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502409 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502420 4756 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502429 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502437 4756 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502445 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502454 4756 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502463 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tw7f\" (UniqueName: \"kubernetes.io/projected/42d479c1-c36c-4481-a4e1-7e155283859d-kube-api-access-6tw7f\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.502495 4756 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d479c1-c36c-4481-a4e1-7e155283859d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.767226 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" event={"ID":"42d479c1-c36c-4481-a4e1-7e155283859d","Type":"ContainerDied","Data":"d59f3e5d89f7abcc0cf81b7a4badb7db9d800f16f91de25a4fd1bb3f925e34de"} Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.767287 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d59f3e5d89f7abcc0cf81b7a4badb7db9d800f16f91de25a4fd1bb3f925e34de" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.767635 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.948237 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q"] Dec 03 11:30:51 crc kubenswrapper[4756]: E1203 11:30:51.948746 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d479c1-c36c-4481-a4e1-7e155283859d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.948770 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d479c1-c36c-4481-a4e1-7e155283859d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 11:30:51 crc kubenswrapper[4756]: E1203 11:30:51.948809 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="extract-utilities" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.948819 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="extract-utilities" Dec 03 11:30:51 crc kubenswrapper[4756]: E1203 11:30:51.948833 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="extract-content" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.948841 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="extract-content" Dec 03 11:30:51 crc kubenswrapper[4756]: E1203 11:30:51.948870 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="registry-server" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.948878 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="registry-server" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.949202 
4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e860160e-696f-4155-8122-4c65d046322e" containerName="registry-server" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.949224 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d479c1-c36c-4481-a4e1-7e155283859d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.950629 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.954769 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.954844 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.955121 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.955258 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.955326 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:30:51 crc kubenswrapper[4756]: I1203 11:30:51.961453 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q"] Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.115513 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxdvj\" (UniqueName: \"kubernetes.io/projected/33ee231c-20f0-429c-92a2-7001e843e8b3-kube-api-access-bxdvj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: 
\"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.115931 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33ee231c-20f0-429c-92a2-7001e843e8b3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.116019 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.116128 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.116399 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.218869 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.219044 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxdvj\" (UniqueName: \"kubernetes.io/projected/33ee231c-20f0-429c-92a2-7001e843e8b3-kube-api-access-bxdvj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.219133 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33ee231c-20f0-429c-92a2-7001e843e8b3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.219185 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.219342 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.221528 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33ee231c-20f0-429c-92a2-7001e843e8b3-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.223731 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.223772 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.224667 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.236839 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxdvj\" (UniqueName: \"kubernetes.io/projected/33ee231c-20f0-429c-92a2-7001e843e8b3-kube-api-access-bxdvj\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-zrq6q\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.278601 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.607561 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.608095 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:30:52 crc kubenswrapper[4756]: I1203 11:30:52.849405 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q"] Dec 03 11:30:53 crc kubenswrapper[4756]: I1203 11:30:53.788331 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" event={"ID":"33ee231c-20f0-429c-92a2-7001e843e8b3","Type":"ContainerStarted","Data":"f7afdd5b9872e31b3e534bee27ed36d5058f043d5e05d0a23144ed47fa384b6d"} Dec 03 11:30:53 crc kubenswrapper[4756]: I1203 11:30:53.789828 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" event={"ID":"33ee231c-20f0-429c-92a2-7001e843e8b3","Type":"ContainerStarted","Data":"a2c2b50d1e4674e1c25b744b0aa336173e813e7e51e490f639276556cf5ac62a"} Dec 03 11:30:53 crc kubenswrapper[4756]: 
I1203 11:30:53.818296 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" podStartSLOduration=2.462724508 podStartE2EDuration="2.818267167s" podCreationTimestamp="2025-12-03 11:30:51 +0000 UTC" firstStartedPulling="2025-12-03 11:30:52.847523358 +0000 UTC m=+2263.877524622" lastFinishedPulling="2025-12-03 11:30:53.203066037 +0000 UTC m=+2264.233067281" observedRunningTime="2025-12-03 11:30:53.805255979 +0000 UTC m=+2264.835257223" watchObservedRunningTime="2025-12-03 11:30:53.818267167 +0000 UTC m=+2264.848268431" Dec 03 11:31:22 crc kubenswrapper[4756]: I1203 11:31:22.607031 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:31:22 crc kubenswrapper[4756]: I1203 11:31:22.608090 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:31:22 crc kubenswrapper[4756]: I1203 11:31:22.608171 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:31:22 crc kubenswrapper[4756]: I1203 11:31:22.609251 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 
11:31:22 crc kubenswrapper[4756]: I1203 11:31:22.609317 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" gracePeriod=600 Dec 03 11:31:22 crc kubenswrapper[4756]: E1203 11:31:22.740043 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:31:23 crc kubenswrapper[4756]: I1203 11:31:23.095630 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" exitCode=0 Dec 03 11:31:23 crc kubenswrapper[4756]: I1203 11:31:23.095697 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df"} Dec 03 11:31:23 crc kubenswrapper[4756]: I1203 11:31:23.095749 4756 scope.go:117] "RemoveContainer" containerID="bee67564a166c569d665ca8e6192a5b98d0e4116e06012f6ecd98a3f28ada620" Dec 03 11:31:23 crc kubenswrapper[4756]: I1203 11:31:23.096659 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:31:23 crc kubenswrapper[4756]: E1203 11:31:23.096995 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:31:37 crc kubenswrapper[4756]: I1203 11:31:37.234228 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:31:37 crc kubenswrapper[4756]: E1203 11:31:37.235209 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:31:50 crc kubenswrapper[4756]: I1203 11:31:50.233456 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:31:50 crc kubenswrapper[4756]: E1203 11:31:50.234386 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:32:03 crc kubenswrapper[4756]: I1203 11:32:03.234137 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:32:03 crc kubenswrapper[4756]: E1203 11:32:03.236236 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:32:05 crc kubenswrapper[4756]: I1203 11:32:05.500506 4756 generic.go:334] "Generic (PLEG): container finished" podID="33ee231c-20f0-429c-92a2-7001e843e8b3" containerID="f7afdd5b9872e31b3e534bee27ed36d5058f043d5e05d0a23144ed47fa384b6d" exitCode=0 Dec 03 11:32:05 crc kubenswrapper[4756]: I1203 11:32:05.500588 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" event={"ID":"33ee231c-20f0-429c-92a2-7001e843e8b3","Type":"ContainerDied","Data":"f7afdd5b9872e31b3e534bee27ed36d5058f043d5e05d0a23144ed47fa384b6d"} Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.077010 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.177779 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33ee231c-20f0-429c-92a2-7001e843e8b3-ovncontroller-config-0\") pod \"33ee231c-20f0-429c-92a2-7001e843e8b3\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.178230 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxdvj\" (UniqueName: \"kubernetes.io/projected/33ee231c-20f0-429c-92a2-7001e843e8b3-kube-api-access-bxdvj\") pod \"33ee231c-20f0-429c-92a2-7001e843e8b3\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.178319 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ovn-combined-ca-bundle\") pod \"33ee231c-20f0-429c-92a2-7001e843e8b3\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.178413 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-inventory\") pod \"33ee231c-20f0-429c-92a2-7001e843e8b3\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.178610 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ssh-key\") pod \"33ee231c-20f0-429c-92a2-7001e843e8b3\" (UID: \"33ee231c-20f0-429c-92a2-7001e843e8b3\") " Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.185196 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/33ee231c-20f0-429c-92a2-7001e843e8b3-kube-api-access-bxdvj" (OuterVolumeSpecName: "kube-api-access-bxdvj") pod "33ee231c-20f0-429c-92a2-7001e843e8b3" (UID: "33ee231c-20f0-429c-92a2-7001e843e8b3"). InnerVolumeSpecName "kube-api-access-bxdvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.194260 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "33ee231c-20f0-429c-92a2-7001e843e8b3" (UID: "33ee231c-20f0-429c-92a2-7001e843e8b3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.206716 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33ee231c-20f0-429c-92a2-7001e843e8b3-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "33ee231c-20f0-429c-92a2-7001e843e8b3" (UID: "33ee231c-20f0-429c-92a2-7001e843e8b3"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.216619 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-inventory" (OuterVolumeSpecName: "inventory") pod "33ee231c-20f0-429c-92a2-7001e843e8b3" (UID: "33ee231c-20f0-429c-92a2-7001e843e8b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.219838 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "33ee231c-20f0-429c-92a2-7001e843e8b3" (UID: "33ee231c-20f0-429c-92a2-7001e843e8b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.281485 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.281520 4756 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/33ee231c-20f0-429c-92a2-7001e843e8b3-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.281534 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxdvj\" (UniqueName: \"kubernetes.io/projected/33ee231c-20f0-429c-92a2-7001e843e8b3-kube-api-access-bxdvj\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.281544 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.281554 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33ee231c-20f0-429c-92a2-7001e843e8b3-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.528764 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" event={"ID":"33ee231c-20f0-429c-92a2-7001e843e8b3","Type":"ContainerDied","Data":"a2c2b50d1e4674e1c25b744b0aa336173e813e7e51e490f639276556cf5ac62a"} Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.528815 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c2b50d1e4674e1c25b744b0aa336173e813e7e51e490f639276556cf5ac62a" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.528865 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zrq6q" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.650433 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7"] Dec 03 11:32:07 crc kubenswrapper[4756]: E1203 11:32:07.651328 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ee231c-20f0-429c-92a2-7001e843e8b3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.651355 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ee231c-20f0-429c-92a2-7001e843e8b3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.651626 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ee231c-20f0-429c-92a2-7001e843e8b3" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.652729 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.658697 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.661222 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.661338 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.661447 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.661808 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.667829 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.673249 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7"] Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.699638 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.699698 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmjwt\" (UniqueName: \"kubernetes.io/projected/e51da37d-c82b-43d5-b61c-5199ca9321d2-kube-api-access-mmjwt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.699773 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.699833 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.700001 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.700024 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.801878 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.801981 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.802068 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.802095 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.802123 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.802147 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmjwt\" (UniqueName: \"kubernetes.io/projected/e51da37d-c82b-43d5-b61c-5199ca9321d2-kube-api-access-mmjwt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.807219 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.808554 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.808729 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.811033 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.812751 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.830886 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmjwt\" (UniqueName: \"kubernetes.io/projected/e51da37d-c82b-43d5-b61c-5199ca9321d2-kube-api-access-mmjwt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:07 crc kubenswrapper[4756]: I1203 11:32:07.984114 4756 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:32:08 crc kubenswrapper[4756]: I1203 11:32:08.547644 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7"] Dec 03 11:32:09 crc kubenswrapper[4756]: I1203 11:32:09.549259 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" event={"ID":"e51da37d-c82b-43d5-b61c-5199ca9321d2","Type":"ContainerStarted","Data":"5e12504845f454540cded216cd4e4ccaab487da81381cc7920aa2af752645916"} Dec 03 11:32:09 crc kubenswrapper[4756]: I1203 11:32:09.550407 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" event={"ID":"e51da37d-c82b-43d5-b61c-5199ca9321d2","Type":"ContainerStarted","Data":"db63201bd2195e83c4e1c70ff97bed73444e00f216b9fb734d18109749922f7d"} Dec 03 11:32:09 crc kubenswrapper[4756]: I1203 11:32:09.576596 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" podStartSLOduration=2.106384223 podStartE2EDuration="2.576425897s" podCreationTimestamp="2025-12-03 11:32:07 +0000 UTC" firstStartedPulling="2025-12-03 11:32:08.555386798 +0000 UTC m=+2339.585388042" lastFinishedPulling="2025-12-03 11:32:09.025428472 +0000 UTC m=+2340.055429716" observedRunningTime="2025-12-03 11:32:09.569638414 +0000 UTC m=+2340.599639658" watchObservedRunningTime="2025-12-03 11:32:09.576425897 +0000 UTC m=+2340.606427141" Dec 03 11:32:14 crc kubenswrapper[4756]: I1203 11:32:14.234219 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:32:14 crc kubenswrapper[4756]: E1203 11:32:14.235837 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.586819 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5kkmb"] Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.590968 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.629621 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kkmb"] Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.681324 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-utilities\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.681385 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-catalog-content\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.681457 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hfvx\" (UniqueName: 
\"kubernetes.io/projected/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-kube-api-access-4hfvx\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.783919 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-utilities\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.784024 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-catalog-content\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.784122 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hfvx\" (UniqueName: \"kubernetes.io/projected/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-kube-api-access-4hfvx\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.784684 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-utilities\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.784877 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-catalog-content\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.810324 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hfvx\" (UniqueName: \"kubernetes.io/projected/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-kube-api-access-4hfvx\") pod \"certified-operators-5kkmb\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:26 crc kubenswrapper[4756]: I1203 11:32:26.929901 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:27 crc kubenswrapper[4756]: I1203 11:32:27.517296 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kkmb"] Dec 03 11:32:27 crc kubenswrapper[4756]: I1203 11:32:27.749827 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kkmb" event={"ID":"0b42905a-1f59-4f95-9b85-cdd0f0bf263c","Type":"ContainerStarted","Data":"54085000256ea07d048a9331658465be720398581cb02adc5afc77af03e1cdb2"} Dec 03 11:32:27 crc kubenswrapper[4756]: I1203 11:32:27.750326 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kkmb" event={"ID":"0b42905a-1f59-4f95-9b85-cdd0f0bf263c","Type":"ContainerStarted","Data":"be3226b324e1a33debdd72fc728b9db1bfc3314a8f1b055fd8fd18edffdef6cb"} Dec 03 11:32:27 crc kubenswrapper[4756]: I1203 11:32:27.752437 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:32:28 crc kubenswrapper[4756]: I1203 11:32:28.235425 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:32:28 crc 
kubenswrapper[4756]: E1203 11:32:28.235921 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:32:28 crc kubenswrapper[4756]: I1203 11:32:28.760218 4756 generic.go:334] "Generic (PLEG): container finished" podID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerID="54085000256ea07d048a9331658465be720398581cb02adc5afc77af03e1cdb2" exitCode=0 Dec 03 11:32:28 crc kubenswrapper[4756]: I1203 11:32:28.760298 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kkmb" event={"ID":"0b42905a-1f59-4f95-9b85-cdd0f0bf263c","Type":"ContainerDied","Data":"54085000256ea07d048a9331658465be720398581cb02adc5afc77af03e1cdb2"} Dec 03 11:32:29 crc kubenswrapper[4756]: I1203 11:32:29.772557 4756 generic.go:334] "Generic (PLEG): container finished" podID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerID="5f43563fb481de024d1d2c4e8c92349374c6067b040c80875b5e24911e4ea139" exitCode=0 Dec 03 11:32:29 crc kubenswrapper[4756]: I1203 11:32:29.772603 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kkmb" event={"ID":"0b42905a-1f59-4f95-9b85-cdd0f0bf263c","Type":"ContainerDied","Data":"5f43563fb481de024d1d2c4e8c92349374c6067b040c80875b5e24911e4ea139"} Dec 03 11:32:30 crc kubenswrapper[4756]: I1203 11:32:30.788124 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kkmb" event={"ID":"0b42905a-1f59-4f95-9b85-cdd0f0bf263c","Type":"ContainerStarted","Data":"6519cdb5caf3ebbe096399aedca48d408ca5fc2d5343279e29d56c15dc1bc3c6"} Dec 03 11:32:30 crc kubenswrapper[4756]: I1203 
11:32:30.816464 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5kkmb" podStartSLOduration=2.358915543 podStartE2EDuration="4.816445089s" podCreationTimestamp="2025-12-03 11:32:26 +0000 UTC" firstStartedPulling="2025-12-03 11:32:27.752201769 +0000 UTC m=+2358.782203003" lastFinishedPulling="2025-12-03 11:32:30.209731305 +0000 UTC m=+2361.239732549" observedRunningTime="2025-12-03 11:32:30.813440735 +0000 UTC m=+2361.843441979" watchObservedRunningTime="2025-12-03 11:32:30.816445089 +0000 UTC m=+2361.846446333" Dec 03 11:32:36 crc kubenswrapper[4756]: I1203 11:32:36.930798 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:36 crc kubenswrapper[4756]: I1203 11:32:36.932099 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:36 crc kubenswrapper[4756]: I1203 11:32:36.992871 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:37 crc kubenswrapper[4756]: I1203 11:32:37.928507 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:37 crc kubenswrapper[4756]: I1203 11:32:37.991674 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kkmb"] Dec 03 11:32:39 crc kubenswrapper[4756]: I1203 11:32:39.883385 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5kkmb" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="registry-server" containerID="cri-o://6519cdb5caf3ebbe096399aedca48d408ca5fc2d5343279e29d56c15dc1bc3c6" gracePeriod=2 Dec 03 11:32:40 crc kubenswrapper[4756]: I1203 11:32:40.234055 4756 scope.go:117] "RemoveContainer" 
containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:32:40 crc kubenswrapper[4756]: E1203 11:32:40.234649 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:32:40 crc kubenswrapper[4756]: I1203 11:32:40.898107 4756 generic.go:334] "Generic (PLEG): container finished" podID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerID="6519cdb5caf3ebbe096399aedca48d408ca5fc2d5343279e29d56c15dc1bc3c6" exitCode=0 Dec 03 11:32:40 crc kubenswrapper[4756]: I1203 11:32:40.898186 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kkmb" event={"ID":"0b42905a-1f59-4f95-9b85-cdd0f0bf263c","Type":"ContainerDied","Data":"6519cdb5caf3ebbe096399aedca48d408ca5fc2d5343279e29d56c15dc1bc3c6"} Dec 03 11:32:40 crc kubenswrapper[4756]: I1203 11:32:40.899622 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kkmb" event={"ID":"0b42905a-1f59-4f95-9b85-cdd0f0bf263c","Type":"ContainerDied","Data":"be3226b324e1a33debdd72fc728b9db1bfc3314a8f1b055fd8fd18edffdef6cb"} Dec 03 11:32:40 crc kubenswrapper[4756]: I1203 11:32:40.899652 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be3226b324e1a33debdd72fc728b9db1bfc3314a8f1b055fd8fd18edffdef6cb" Dec 03 11:32:40 crc kubenswrapper[4756]: I1203 11:32:40.931748 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.058261 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-catalog-content\") pod \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.058558 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hfvx\" (UniqueName: \"kubernetes.io/projected/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-kube-api-access-4hfvx\") pod \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.058604 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-utilities\") pod \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\" (UID: \"0b42905a-1f59-4f95-9b85-cdd0f0bf263c\") " Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.060339 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-utilities" (OuterVolumeSpecName: "utilities") pod "0b42905a-1f59-4f95-9b85-cdd0f0bf263c" (UID: "0b42905a-1f59-4f95-9b85-cdd0f0bf263c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.080121 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-kube-api-access-4hfvx" (OuterVolumeSpecName: "kube-api-access-4hfvx") pod "0b42905a-1f59-4f95-9b85-cdd0f0bf263c" (UID: "0b42905a-1f59-4f95-9b85-cdd0f0bf263c"). InnerVolumeSpecName "kube-api-access-4hfvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.121119 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b42905a-1f59-4f95-9b85-cdd0f0bf263c" (UID: "0b42905a-1f59-4f95-9b85-cdd0f0bf263c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.162238 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hfvx\" (UniqueName: \"kubernetes.io/projected/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-kube-api-access-4hfvx\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.162288 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.162299 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b42905a-1f59-4f95-9b85-cdd0f0bf263c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.909612 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5kkmb" Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.941455 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kkmb"] Dec 03 11:32:41 crc kubenswrapper[4756]: I1203 11:32:41.963070 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5kkmb"] Dec 03 11:32:43 crc kubenswrapper[4756]: I1203 11:32:43.245796 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" path="/var/lib/kubelet/pods/0b42905a-1f59-4f95-9b85-cdd0f0bf263c/volumes" Dec 03 11:32:52 crc kubenswrapper[4756]: I1203 11:32:52.235876 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:32:52 crc kubenswrapper[4756]: E1203 11:32:52.240407 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:33:02 crc kubenswrapper[4756]: I1203 11:33:02.136452 4756 generic.go:334] "Generic (PLEG): container finished" podID="e51da37d-c82b-43d5-b61c-5199ca9321d2" containerID="5e12504845f454540cded216cd4e4ccaab487da81381cc7920aa2af752645916" exitCode=0 Dec 03 11:33:02 crc kubenswrapper[4756]: I1203 11:33:02.136516 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" event={"ID":"e51da37d-c82b-43d5-b61c-5199ca9321d2","Type":"ContainerDied","Data":"5e12504845f454540cded216cd4e4ccaab487da81381cc7920aa2af752645916"} Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 
11:33:03.612677 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.726787 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-metadata-combined-ca-bundle\") pod \"e51da37d-c82b-43d5-b61c-5199ca9321d2\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.726834 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e51da37d-c82b-43d5-b61c-5199ca9321d2\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.726914 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-inventory\") pod \"e51da37d-c82b-43d5-b61c-5199ca9321d2\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.727098 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmjwt\" (UniqueName: \"kubernetes.io/projected/e51da37d-c82b-43d5-b61c-5199ca9321d2-kube-api-access-mmjwt\") pod \"e51da37d-c82b-43d5-b61c-5199ca9321d2\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.727140 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-ssh-key\") pod \"e51da37d-c82b-43d5-b61c-5199ca9321d2\" (UID: 
\"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.727201 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-nova-metadata-neutron-config-0\") pod \"e51da37d-c82b-43d5-b61c-5199ca9321d2\" (UID: \"e51da37d-c82b-43d5-b61c-5199ca9321d2\") " Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.733992 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e51da37d-c82b-43d5-b61c-5199ca9321d2" (UID: "e51da37d-c82b-43d5-b61c-5199ca9321d2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.759121 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51da37d-c82b-43d5-b61c-5199ca9321d2-kube-api-access-mmjwt" (OuterVolumeSpecName: "kube-api-access-mmjwt") pod "e51da37d-c82b-43d5-b61c-5199ca9321d2" (UID: "e51da37d-c82b-43d5-b61c-5199ca9321d2"). InnerVolumeSpecName "kube-api-access-mmjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.763456 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-inventory" (OuterVolumeSpecName: "inventory") pod "e51da37d-c82b-43d5-b61c-5199ca9321d2" (UID: "e51da37d-c82b-43d5-b61c-5199ca9321d2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.763995 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e51da37d-c82b-43d5-b61c-5199ca9321d2" (UID: "e51da37d-c82b-43d5-b61c-5199ca9321d2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.765848 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e51da37d-c82b-43d5-b61c-5199ca9321d2" (UID: "e51da37d-c82b-43d5-b61c-5199ca9321d2"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.766752 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e51da37d-c82b-43d5-b61c-5199ca9321d2" (UID: "e51da37d-c82b-43d5-b61c-5199ca9321d2"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.830500 4756 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.830552 4756 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.830569 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.830582 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmjwt\" (UniqueName: \"kubernetes.io/projected/e51da37d-c82b-43d5-b61c-5199ca9321d2-kube-api-access-mmjwt\") on node \"crc\" DevicePath \"\"" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.830596 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:33:03 crc kubenswrapper[4756]: I1203 11:33:03.830611 4756 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e51da37d-c82b-43d5-b61c-5199ca9321d2-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.158402 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" 
event={"ID":"e51da37d-c82b-43d5-b61c-5199ca9321d2","Type":"ContainerDied","Data":"db63201bd2195e83c4e1c70ff97bed73444e00f216b9fb734d18109749922f7d"} Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.158449 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db63201bd2195e83c4e1c70ff97bed73444e00f216b9fb734d18109749922f7d" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.158519 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.350832 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5"] Dec 03 11:33:04 crc kubenswrapper[4756]: E1203 11:33:04.353686 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="extract-utilities" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.353734 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="extract-utilities" Dec 03 11:33:04 crc kubenswrapper[4756]: E1203 11:33:04.353772 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51da37d-c82b-43d5-b61c-5199ca9321d2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.353784 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51da37d-c82b-43d5-b61c-5199ca9321d2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 11:33:04 crc kubenswrapper[4756]: E1203 11:33:04.353848 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="registry-server" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.353856 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="registry-server" Dec 03 11:33:04 crc kubenswrapper[4756]: E1203 11:33:04.353884 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="extract-content" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.353891 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="extract-content" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.354922 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51da37d-c82b-43d5-b61c-5199ca9321d2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.354969 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b42905a-1f59-4f95-9b85-cdd0f0bf263c" containerName="registry-server" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.358456 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.368896 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.369271 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.369651 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.370965 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.371426 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.380299 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5"] Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.441790 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.441842 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.441874 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.442475 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9bh\" (UniqueName: \"kubernetes.io/projected/23845452-bd2b-4841-b275-0adbc22178c1-kube-api-access-7x9bh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.442839 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.545110 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.545174 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.545200 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.545230 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.545317 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9bh\" (UniqueName: \"kubernetes.io/projected/23845452-bd2b-4841-b275-0adbc22178c1-kube-api-access-7x9bh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.550798 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.550892 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.551361 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.554676 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.567201 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9bh\" (UniqueName: \"kubernetes.io/projected/23845452-bd2b-4841-b275-0adbc22178c1-kube-api-access-7x9bh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:04 crc kubenswrapper[4756]: I1203 11:33:04.690548 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:33:05 crc kubenswrapper[4756]: I1203 11:33:05.251610 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5"] Dec 03 11:33:06 crc kubenswrapper[4756]: I1203 11:33:06.175748 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" event={"ID":"23845452-bd2b-4841-b275-0adbc22178c1","Type":"ContainerStarted","Data":"e31c21173d4ec2033ea464e9492ffb8f0d909b15f616ea1ae187127369871ef2"} Dec 03 11:33:07 crc kubenswrapper[4756]: I1203 11:33:07.188427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" event={"ID":"23845452-bd2b-4841-b275-0adbc22178c1","Type":"ContainerStarted","Data":"528f3b429af30fa1077ae97f597fe162f72a5c39a38deab19d8007ade72a5a08"} Dec 03 11:33:07 crc kubenswrapper[4756]: I1203 11:33:07.208873 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" podStartSLOduration=2.518459928 podStartE2EDuration="3.208845201s" podCreationTimestamp="2025-12-03 11:33:04 +0000 UTC" firstStartedPulling="2025-12-03 11:33:05.234679186 +0000 UTC m=+2396.264680430" lastFinishedPulling="2025-12-03 11:33:05.925064459 +0000 UTC m=+2396.955065703" observedRunningTime="2025-12-03 11:33:07.205364373 +0000 UTC m=+2398.235365627" watchObservedRunningTime="2025-12-03 11:33:07.208845201 +0000 UTC m=+2398.238846445" Dec 03 11:33:07 crc kubenswrapper[4756]: I1203 11:33:07.233926 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:33:07 crc kubenswrapper[4756]: E1203 11:33:07.234350 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:33:18 crc kubenswrapper[4756]: I1203 11:33:18.233490 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:33:18 crc kubenswrapper[4756]: E1203 11:33:18.234476 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:33:29 crc kubenswrapper[4756]: I1203 11:33:29.235841 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:33:29 crc kubenswrapper[4756]: E1203 11:33:29.236911 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:33:44 crc kubenswrapper[4756]: I1203 11:33:44.234410 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:33:44 crc kubenswrapper[4756]: E1203 11:33:44.235714 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:33:55 crc kubenswrapper[4756]: I1203 11:33:55.237175 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:33:55 crc kubenswrapper[4756]: E1203 11:33:55.238481 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:34:07 crc kubenswrapper[4756]: I1203 11:34:07.233532 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:34:07 crc kubenswrapper[4756]: E1203 11:34:07.234617 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:34:19 crc kubenswrapper[4756]: I1203 11:34:19.242760 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:34:19 crc kubenswrapper[4756]: E1203 11:34:19.244146 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:34:31 crc kubenswrapper[4756]: I1203 11:34:31.235398 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:34:31 crc kubenswrapper[4756]: E1203 11:34:31.236488 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:34:43 crc kubenswrapper[4756]: I1203 11:34:43.234738 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:34:43 crc kubenswrapper[4756]: E1203 11:34:43.237039 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:34:54 crc kubenswrapper[4756]: I1203 11:34:54.234544 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:34:54 crc kubenswrapper[4756]: E1203 11:34:54.235424 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:35:08 crc kubenswrapper[4756]: I1203 11:35:08.235114 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:35:08 crc kubenswrapper[4756]: E1203 11:35:08.236116 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:35:23 crc kubenswrapper[4756]: I1203 11:35:23.235005 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:35:23 crc kubenswrapper[4756]: E1203 11:35:23.236480 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:35:35 crc kubenswrapper[4756]: I1203 11:35:35.235371 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:35:35 crc kubenswrapper[4756]: E1203 11:35:35.236527 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:35:50 crc kubenswrapper[4756]: I1203 11:35:50.236427 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:35:50 crc kubenswrapper[4756]: E1203 11:35:50.237446 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:36:05 crc kubenswrapper[4756]: I1203 11:36:05.234202 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:36:05 crc kubenswrapper[4756]: E1203 11:36:05.235173 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:36:20 crc kubenswrapper[4756]: I1203 11:36:20.235079 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:36:20 crc kubenswrapper[4756]: E1203 11:36:20.236503 4756 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:36:33 crc kubenswrapper[4756]: I1203 11:36:33.234379 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:36:34 crc kubenswrapper[4756]: I1203 11:36:34.746567 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"f2d2339d48304c122e5c986eb5f41e55bfa042f513c718620aa2da2e3c4adb10"} Dec 03 11:37:40 crc kubenswrapper[4756]: E1203 11:37:40.858598 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23845452_bd2b_4841_b275_0adbc22178c1.slice/crio-conmon-528f3b429af30fa1077ae97f597fe162f72a5c39a38deab19d8007ade72a5a08.scope\": RecentStats: unable to find data in memory cache]" Dec 03 11:37:41 crc kubenswrapper[4756]: I1203 11:37:41.569156 4756 generic.go:334] "Generic (PLEG): container finished" podID="23845452-bd2b-4841-b275-0adbc22178c1" containerID="528f3b429af30fa1077ae97f597fe162f72a5c39a38deab19d8007ade72a5a08" exitCode=0 Dec 03 11:37:41 crc kubenswrapper[4756]: I1203 11:37:41.569215 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" event={"ID":"23845452-bd2b-4841-b275-0adbc22178c1","Type":"ContainerDied","Data":"528f3b429af30fa1077ae97f597fe162f72a5c39a38deab19d8007ade72a5a08"} Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.145309 4756 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.296723 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-secret-0\") pod \"23845452-bd2b-4841-b275-0adbc22178c1\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.296902 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-ssh-key\") pod \"23845452-bd2b-4841-b275-0adbc22178c1\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.297167 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-inventory\") pod \"23845452-bd2b-4841-b275-0adbc22178c1\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.297286 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x9bh\" (UniqueName: \"kubernetes.io/projected/23845452-bd2b-4841-b275-0adbc22178c1-kube-api-access-7x9bh\") pod \"23845452-bd2b-4841-b275-0adbc22178c1\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.297449 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-combined-ca-bundle\") pod \"23845452-bd2b-4841-b275-0adbc22178c1\" (UID: \"23845452-bd2b-4841-b275-0adbc22178c1\") " Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.305180 4756 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23845452-bd2b-4841-b275-0adbc22178c1-kube-api-access-7x9bh" (OuterVolumeSpecName: "kube-api-access-7x9bh") pod "23845452-bd2b-4841-b275-0adbc22178c1" (UID: "23845452-bd2b-4841-b275-0adbc22178c1"). InnerVolumeSpecName "kube-api-access-7x9bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.305525 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "23845452-bd2b-4841-b275-0adbc22178c1" (UID: "23845452-bd2b-4841-b275-0adbc22178c1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.329780 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-inventory" (OuterVolumeSpecName: "inventory") pod "23845452-bd2b-4841-b275-0adbc22178c1" (UID: "23845452-bd2b-4841-b275-0adbc22178c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.333034 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "23845452-bd2b-4841-b275-0adbc22178c1" (UID: "23845452-bd2b-4841-b275-0adbc22178c1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.335205 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "23845452-bd2b-4841-b275-0adbc22178c1" (UID: "23845452-bd2b-4841-b275-0adbc22178c1"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.400057 4756 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.400094 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.400105 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.400115 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x9bh\" (UniqueName: \"kubernetes.io/projected/23845452-bd2b-4841-b275-0adbc22178c1-kube-api-access-7x9bh\") on node \"crc\" DevicePath \"\"" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.400128 4756 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23845452-bd2b-4841-b275-0adbc22178c1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.596109 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" event={"ID":"23845452-bd2b-4841-b275-0adbc22178c1","Type":"ContainerDied","Data":"e31c21173d4ec2033ea464e9492ffb8f0d909b15f616ea1ae187127369871ef2"} Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.596645 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e31c21173d4ec2033ea464e9492ffb8f0d909b15f616ea1ae187127369871ef2" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.596226 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.723636 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6"] Dec 03 11:37:43 crc kubenswrapper[4756]: E1203 11:37:43.724209 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23845452-bd2b-4841-b275-0adbc22178c1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.724241 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="23845452-bd2b-4841-b275-0adbc22178c1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.724537 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="23845452-bd2b-4841-b275-0adbc22178c1" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.726391 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.731144 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.731200 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.731485 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.731683 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.731837 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.732066 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.732265 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.751841 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6"] Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.822670 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 
11:37:43.822978 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.823167 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.823368 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.823647 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.823684 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7f5w\" (UniqueName: 
\"kubernetes.io/projected/edbbebac-e053-45ad-9b17-83d0e55fba86-kube-api-access-n7f5w\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.823757 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.823803 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.823866 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926506 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926595 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926635 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7f5w\" (UniqueName: \"kubernetes.io/projected/edbbebac-e053-45ad-9b17-83d0e55fba86-kube-api-access-n7f5w\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926691 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926728 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926779 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926842 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926876 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.926928 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.928332 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc 
kubenswrapper[4756]: I1203 11:37:43.931805 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.932243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.932972 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.933380 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.934386 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: 
\"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.934731 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.940705 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:43 crc kubenswrapper[4756]: I1203 11:37:43.955861 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7f5w\" (UniqueName: \"kubernetes.io/projected/edbbebac-e053-45ad-9b17-83d0e55fba86-kube-api-access-n7f5w\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jv7s6\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:44 crc kubenswrapper[4756]: I1203 11:37:44.056774 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:37:44 crc kubenswrapper[4756]: I1203 11:37:44.812528 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6"] Dec 03 11:37:44 crc kubenswrapper[4756]: I1203 11:37:44.823468 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:37:45 crc kubenswrapper[4756]: I1203 11:37:45.628620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" event={"ID":"edbbebac-e053-45ad-9b17-83d0e55fba86","Type":"ContainerStarted","Data":"984ed19311e2e3d7f91e3e511e808a1de1f3ad9bf847939806059b3604887d6f"} Dec 03 11:37:46 crc kubenswrapper[4756]: I1203 11:37:46.658685 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" event={"ID":"edbbebac-e053-45ad-9b17-83d0e55fba86","Type":"ContainerStarted","Data":"0d049a025e78d2caa14568f9f2a223092da40a887669888692304b5796b3cb57"} Dec 03 11:37:46 crc kubenswrapper[4756]: I1203 11:37:46.678485 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" podStartSLOduration=3.011485752 podStartE2EDuration="3.678466134s" podCreationTimestamp="2025-12-03 11:37:43 +0000 UTC" firstStartedPulling="2025-12-03 11:37:44.823251136 +0000 UTC m=+2675.853252370" lastFinishedPulling="2025-12-03 11:37:45.490231508 +0000 UTC m=+2676.520232752" observedRunningTime="2025-12-03 11:37:46.675520631 +0000 UTC m=+2677.705521895" watchObservedRunningTime="2025-12-03 11:37:46.678466134 +0000 UTC m=+2677.708467388" Dec 03 11:38:50 crc kubenswrapper[4756]: I1203 11:38:50.537804 4756 scope.go:117] "RemoveContainer" containerID="6519cdb5caf3ebbe096399aedca48d408ca5fc2d5343279e29d56c15dc1bc3c6" Dec 03 11:38:50 crc kubenswrapper[4756]: I1203 11:38:50.562725 4756 
scope.go:117] "RemoveContainer" containerID="5f43563fb481de024d1d2c4e8c92349374c6067b040c80875b5e24911e4ea139" Dec 03 11:38:50 crc kubenswrapper[4756]: I1203 11:38:50.586857 4756 scope.go:117] "RemoveContainer" containerID="54085000256ea07d048a9331658465be720398581cb02adc5afc77af03e1cdb2" Dec 03 11:38:52 crc kubenswrapper[4756]: I1203 11:38:52.606859 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:38:52 crc kubenswrapper[4756]: I1203 11:38:52.607905 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.256881 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skrwt"] Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.264041 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.283667 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skrwt"] Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.457630 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gbz2\" (UniqueName: \"kubernetes.io/projected/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-kube-api-access-2gbz2\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.457903 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-utilities\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.458013 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-catalog-content\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.560015 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-utilities\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.560106 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-catalog-content\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.560183 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gbz2\" (UniqueName: \"kubernetes.io/projected/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-kube-api-access-2gbz2\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.560694 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-utilities\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.560853 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-catalog-content\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.601330 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gbz2\" (UniqueName: \"kubernetes.io/projected/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-kube-api-access-2gbz2\") pod \"redhat-operators-skrwt\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:10 crc kubenswrapper[4756]: I1203 11:39:10.638310 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:11 crc kubenswrapper[4756]: I1203 11:39:11.158867 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skrwt"] Dec 03 11:39:11 crc kubenswrapper[4756]: I1203 11:39:11.566370 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skrwt" event={"ID":"6cfd9edc-15da-4f47-b627-c65fb1bdda2f","Type":"ContainerStarted","Data":"012e9b52ed8f873e0c236f5f55a211dc5bb9c9f849ddbd986005e2a0524c5f15"} Dec 03 11:39:12 crc kubenswrapper[4756]: I1203 11:39:12.582196 4756 generic.go:334] "Generic (PLEG): container finished" podID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerID="180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2" exitCode=0 Dec 03 11:39:12 crc kubenswrapper[4756]: I1203 11:39:12.582297 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skrwt" event={"ID":"6cfd9edc-15da-4f47-b627-c65fb1bdda2f","Type":"ContainerDied","Data":"180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2"} Dec 03 11:39:15 crc kubenswrapper[4756]: I1203 11:39:15.625519 4756 generic.go:334] "Generic (PLEG): container finished" podID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerID="2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685" exitCode=0 Dec 03 11:39:15 crc kubenswrapper[4756]: I1203 11:39:15.625788 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skrwt" event={"ID":"6cfd9edc-15da-4f47-b627-c65fb1bdda2f","Type":"ContainerDied","Data":"2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685"} Dec 03 11:39:17 crc kubenswrapper[4756]: I1203 11:39:17.654777 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skrwt" 
event={"ID":"6cfd9edc-15da-4f47-b627-c65fb1bdda2f","Type":"ContainerStarted","Data":"1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302"} Dec 03 11:39:17 crc kubenswrapper[4756]: I1203 11:39:17.687052 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skrwt" podStartSLOduration=3.497768015 podStartE2EDuration="7.687013731s" podCreationTimestamp="2025-12-03 11:39:10 +0000 UTC" firstStartedPulling="2025-12-03 11:39:12.585125587 +0000 UTC m=+2763.615126841" lastFinishedPulling="2025-12-03 11:39:16.774371313 +0000 UTC m=+2767.804372557" observedRunningTime="2025-12-03 11:39:17.676631835 +0000 UTC m=+2768.706633099" watchObservedRunningTime="2025-12-03 11:39:17.687013731 +0000 UTC m=+2768.717014985" Dec 03 11:39:20 crc kubenswrapper[4756]: I1203 11:39:20.639236 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:20 crc kubenswrapper[4756]: I1203 11:39:20.646301 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:21 crc kubenswrapper[4756]: I1203 11:39:21.693975 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skrwt" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="registry-server" probeResult="failure" output=< Dec 03 11:39:21 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 11:39:21 crc kubenswrapper[4756]: > Dec 03 11:39:22 crc kubenswrapper[4756]: I1203 11:39:22.607435 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:39:22 crc kubenswrapper[4756]: I1203 11:39:22.608056 4756 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:39:30 crc kubenswrapper[4756]: I1203 11:39:30.695507 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:30 crc kubenswrapper[4756]: I1203 11:39:30.765638 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:30 crc kubenswrapper[4756]: I1203 11:39:30.959448 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skrwt"] Dec 03 11:39:31 crc kubenswrapper[4756]: I1203 11:39:31.790665 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skrwt" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="registry-server" containerID="cri-o://1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302" gracePeriod=2 Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.336724 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.461461 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-catalog-content\") pod \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.461657 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gbz2\" (UniqueName: \"kubernetes.io/projected/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-kube-api-access-2gbz2\") pod \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.461785 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-utilities\") pod \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\" (UID: \"6cfd9edc-15da-4f47-b627-c65fb1bdda2f\") " Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.462975 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-utilities" (OuterVolumeSpecName: "utilities") pod "6cfd9edc-15da-4f47-b627-c65fb1bdda2f" (UID: "6cfd9edc-15da-4f47-b627-c65fb1bdda2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.474227 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-kube-api-access-2gbz2" (OuterVolumeSpecName: "kube-api-access-2gbz2") pod "6cfd9edc-15da-4f47-b627-c65fb1bdda2f" (UID: "6cfd9edc-15da-4f47-b627-c65fb1bdda2f"). InnerVolumeSpecName "kube-api-access-2gbz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.564405 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gbz2\" (UniqueName: \"kubernetes.io/projected/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-kube-api-access-2gbz2\") on node \"crc\" DevicePath \"\"" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.564449 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.602655 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cfd9edc-15da-4f47-b627-c65fb1bdda2f" (UID: "6cfd9edc-15da-4f47-b627-c65fb1bdda2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.667489 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cfd9edc-15da-4f47-b627-c65fb1bdda2f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.802773 4756 generic.go:334] "Generic (PLEG): container finished" podID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerID="1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302" exitCode=0 Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.802824 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skrwt" event={"ID":"6cfd9edc-15da-4f47-b627-c65fb1bdda2f","Type":"ContainerDied","Data":"1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302"} Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.802871 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-skrwt" event={"ID":"6cfd9edc-15da-4f47-b627-c65fb1bdda2f","Type":"ContainerDied","Data":"012e9b52ed8f873e0c236f5f55a211dc5bb9c9f849ddbd986005e2a0524c5f15"} Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.802867 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skrwt" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.802893 4756 scope.go:117] "RemoveContainer" containerID="1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.841851 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skrwt"] Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.843567 4756 scope.go:117] "RemoveContainer" containerID="2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.853805 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skrwt"] Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.884528 4756 scope.go:117] "RemoveContainer" containerID="180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.935504 4756 scope.go:117] "RemoveContainer" containerID="1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302" Dec 03 11:39:32 crc kubenswrapper[4756]: E1203 11:39:32.937507 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302\": container with ID starting with 1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302 not found: ID does not exist" containerID="1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.937556 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302"} err="failed to get container status \"1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302\": rpc error: code = NotFound desc = could not find container \"1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302\": container with ID starting with 1bd8d4a1ed5927cb7d6f762599148cb3704ce75ebee4eec1c3dea0992e3aa302 not found: ID does not exist" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.937588 4756 scope.go:117] "RemoveContainer" containerID="2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685" Dec 03 11:39:32 crc kubenswrapper[4756]: E1203 11:39:32.937811 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685\": container with ID starting with 2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685 not found: ID does not exist" containerID="2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.937842 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685"} err="failed to get container status \"2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685\": rpc error: code = NotFound desc = could not find container \"2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685\": container with ID starting with 2fcb8b6bbf5300b1d2c902125d9fb237ed2ca453005df9839ecc24f765fec685 not found: ID does not exist" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.937861 4756 scope.go:117] "RemoveContainer" containerID="180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2" Dec 03 11:39:32 crc kubenswrapper[4756]: E1203 
11:39:32.938385 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2\": container with ID starting with 180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2 not found: ID does not exist" containerID="180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2" Dec 03 11:39:32 crc kubenswrapper[4756]: I1203 11:39:32.938409 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2"} err="failed to get container status \"180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2\": rpc error: code = NotFound desc = could not find container \"180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2\": container with ID starting with 180872e8e4feaaf43a0a71b3141108faff9e48af9401b3eaec4e1de86ab44dd2 not found: ID does not exist" Dec 03 11:39:33 crc kubenswrapper[4756]: I1203 11:39:33.247080 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" path="/var/lib/kubelet/pods/6cfd9edc-15da-4f47-b627-c65fb1bdda2f/volumes" Dec 03 11:39:52 crc kubenswrapper[4756]: I1203 11:39:52.607458 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:39:52 crc kubenswrapper[4756]: I1203 11:39:52.608553 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 03 11:39:52 crc kubenswrapper[4756]: I1203 11:39:52.608633 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:39:52 crc kubenswrapper[4756]: I1203 11:39:52.609710 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2d2339d48304c122e5c986eb5f41e55bfa042f513c718620aa2da2e3c4adb10"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:39:52 crc kubenswrapper[4756]: I1203 11:39:52.609822 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://f2d2339d48304c122e5c986eb5f41e55bfa042f513c718620aa2da2e3c4adb10" gracePeriod=600 Dec 03 11:39:55 crc kubenswrapper[4756]: I1203 11:39:55.073974 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="f2d2339d48304c122e5c986eb5f41e55bfa042f513c718620aa2da2e3c4adb10" exitCode=0 Dec 03 11:39:55 crc kubenswrapper[4756]: I1203 11:39:55.074018 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"f2d2339d48304c122e5c986eb5f41e55bfa042f513c718620aa2da2e3c4adb10"} Dec 03 11:39:55 crc kubenswrapper[4756]: I1203 11:39:55.075250 4756 scope.go:117] "RemoveContainer" containerID="095459a23ecd723d3c387e74e7a6dc5f748dffa8be15994f2167ff557bbe08df" Dec 03 11:39:56 crc kubenswrapper[4756]: I1203 11:39:56.086071 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"} Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.702202 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ng8qx"] Dec 03 11:40:19 crc kubenswrapper[4756]: E1203 11:40:19.703279 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="extract-content" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.703299 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="extract-content" Dec 03 11:40:19 crc kubenswrapper[4756]: E1203 11:40:19.703341 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="extract-utilities" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.703349 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="extract-utilities" Dec 03 11:40:19 crc kubenswrapper[4756]: E1203 11:40:19.703372 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="registry-server" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.703380 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="registry-server" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.703615 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfd9edc-15da-4f47-b627-c65fb1bdda2f" containerName="registry-server" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.710166 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.722803 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-utilities\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.722903 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-catalog-content\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.723092 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqk4\" (UniqueName: \"kubernetes.io/projected/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-kube-api-access-znqk4\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.754621 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ng8qx"] Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.825881 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-utilities\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.826044 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-catalog-content\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.826143 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqk4\" (UniqueName: \"kubernetes.io/projected/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-kube-api-access-znqk4\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.827220 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-utilities\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.827558 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-catalog-content\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:19 crc kubenswrapper[4756]: I1203 11:40:19.856428 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqk4\" (UniqueName: \"kubernetes.io/projected/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-kube-api-access-znqk4\") pod \"redhat-marketplace-ng8qx\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:20 crc kubenswrapper[4756]: I1203 11:40:20.040308 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:20 crc kubenswrapper[4756]: I1203 11:40:20.526485 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ng8qx"] Dec 03 11:40:21 crc kubenswrapper[4756]: I1203 11:40:21.388479 4756 generic.go:334] "Generic (PLEG): container finished" podID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerID="ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7" exitCode=0 Dec 03 11:40:21 crc kubenswrapper[4756]: I1203 11:40:21.388717 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ng8qx" event={"ID":"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e","Type":"ContainerDied","Data":"ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7"} Dec 03 11:40:21 crc kubenswrapper[4756]: I1203 11:40:21.388791 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ng8qx" event={"ID":"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e","Type":"ContainerStarted","Data":"f29a61cd6aebec327695a89fd707ec5a749d12b9ae17d8a79519181757848dd0"} Dec 03 11:40:24 crc kubenswrapper[4756]: I1203 11:40:24.416417 4756 generic.go:334] "Generic (PLEG): container finished" podID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerID="4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc" exitCode=0 Dec 03 11:40:24 crc kubenswrapper[4756]: I1203 11:40:24.416491 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ng8qx" event={"ID":"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e","Type":"ContainerDied","Data":"4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc"} Dec 03 11:40:28 crc kubenswrapper[4756]: I1203 11:40:28.459341 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ng8qx" 
event={"ID":"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e","Type":"ContainerStarted","Data":"1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80"} Dec 03 11:40:28 crc kubenswrapper[4756]: I1203 11:40:28.490517 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ng8qx" podStartSLOduration=3.295760275 podStartE2EDuration="9.490487767s" podCreationTimestamp="2025-12-03 11:40:19 +0000 UTC" firstStartedPulling="2025-12-03 11:40:21.39088812 +0000 UTC m=+2832.420889384" lastFinishedPulling="2025-12-03 11:40:27.585615632 +0000 UTC m=+2838.615616876" observedRunningTime="2025-12-03 11:40:28.480386939 +0000 UTC m=+2839.510388223" watchObservedRunningTime="2025-12-03 11:40:28.490487767 +0000 UTC m=+2839.520489051" Dec 03 11:40:30 crc kubenswrapper[4756]: I1203 11:40:30.041747 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:30 crc kubenswrapper[4756]: I1203 11:40:30.042224 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:30 crc kubenswrapper[4756]: I1203 11:40:30.116059 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:40 crc kubenswrapper[4756]: I1203 11:40:40.094226 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:40 crc kubenswrapper[4756]: I1203 11:40:40.142978 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ng8qx"] Dec 03 11:40:40 crc kubenswrapper[4756]: I1203 11:40:40.576320 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ng8qx" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="registry-server" 
containerID="cri-o://1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80" gracePeriod=2 Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.552220 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.596453 4756 generic.go:334] "Generic (PLEG): container finished" podID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerID="1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80" exitCode=0 Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.596519 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ng8qx" event={"ID":"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e","Type":"ContainerDied","Data":"1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80"} Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.596558 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ng8qx" event={"ID":"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e","Type":"ContainerDied","Data":"f29a61cd6aebec327695a89fd707ec5a749d12b9ae17d8a79519181757848dd0"} Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.596582 4756 scope.go:117] "RemoveContainer" containerID="1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.596775 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ng8qx" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.627179 4756 scope.go:117] "RemoveContainer" containerID="4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.650997 4756 scope.go:117] "RemoveContainer" containerID="ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.698041 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znqk4\" (UniqueName: \"kubernetes.io/projected/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-kube-api-access-znqk4\") pod \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.705434 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-kube-api-access-znqk4" (OuterVolumeSpecName: "kube-api-access-znqk4") pod "968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" (UID: "968bcaa6-baeb-4e34-bed9-20d3b5e6f17e"). InnerVolumeSpecName "kube-api-access-znqk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.707729 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-utilities\") pod \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.708202 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-catalog-content\") pod \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\" (UID: \"968bcaa6-baeb-4e34-bed9-20d3b5e6f17e\") " Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.708809 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-utilities" (OuterVolumeSpecName: "utilities") pod "968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" (UID: "968bcaa6-baeb-4e34-bed9-20d3b5e6f17e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.709581 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.709661 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znqk4\" (UniqueName: \"kubernetes.io/projected/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-kube-api-access-znqk4\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.712046 4756 scope.go:117] "RemoveContainer" containerID="1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80" Dec 03 11:40:41 crc kubenswrapper[4756]: E1203 11:40:41.712733 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80\": container with ID starting with 1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80 not found: ID does not exist" containerID="1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.712790 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80"} err="failed to get container status \"1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80\": rpc error: code = NotFound desc = could not find container \"1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80\": container with ID starting with 1a64a2444cafa04d5ad5379c84730b98a9907389cbdce13667397d4231a2fa80 not found: ID does not exist" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.712823 4756 scope.go:117] "RemoveContainer" 
containerID="4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc" Dec 03 11:40:41 crc kubenswrapper[4756]: E1203 11:40:41.713167 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc\": container with ID starting with 4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc not found: ID does not exist" containerID="4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.713196 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc"} err="failed to get container status \"4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc\": rpc error: code = NotFound desc = could not find container \"4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc\": container with ID starting with 4c7c3085e209b4b19102bec0a8e4900cc7eb18f6989102d94b29cf18095486fc not found: ID does not exist" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.713212 4756 scope.go:117] "RemoveContainer" containerID="ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7" Dec 03 11:40:41 crc kubenswrapper[4756]: E1203 11:40:41.713468 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7\": container with ID starting with ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7 not found: ID does not exist" containerID="ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.713491 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7"} err="failed to get container status \"ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7\": rpc error: code = NotFound desc = could not find container \"ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7\": container with ID starting with ab11a89a433cd2214a8e792ff1b6ab292da96345a56ef50acf525eef895db3b7 not found: ID does not exist" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.727905 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" (UID: "968bcaa6-baeb-4e34-bed9-20d3b5e6f17e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.811736 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.933351 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ng8qx"] Dec 03 11:40:41 crc kubenswrapper[4756]: I1203 11:40:41.941316 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ng8qx"] Dec 03 11:40:43 crc kubenswrapper[4756]: I1203 11:40:43.245803 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" path="/var/lib/kubelet/pods/968bcaa6-baeb-4e34-bed9-20d3b5e6f17e/volumes" Dec 03 11:40:49 crc kubenswrapper[4756]: I1203 11:40:49.685550 4756 generic.go:334] "Generic (PLEG): container finished" podID="edbbebac-e053-45ad-9b17-83d0e55fba86" 
containerID="0d049a025e78d2caa14568f9f2a223092da40a887669888692304b5796b3cb57" exitCode=0 Dec 03 11:40:49 crc kubenswrapper[4756]: I1203 11:40:49.685666 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" event={"ID":"edbbebac-e053-45ad-9b17-83d0e55fba86","Type":"ContainerDied","Data":"0d049a025e78d2caa14568f9f2a223092da40a887669888692304b5796b3cb57"} Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.131709 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.240559 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-combined-ca-bundle\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.240644 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-0\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.240713 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-1\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.240748 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-extra-config-0\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.240847 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7f5w\" (UniqueName: \"kubernetes.io/projected/edbbebac-e053-45ad-9b17-83d0e55fba86-kube-api-access-n7f5w\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.240872 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-0\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.240921 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-ssh-key\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.241013 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-1\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.241071 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-inventory\") pod \"edbbebac-e053-45ad-9b17-83d0e55fba86\" (UID: \"edbbebac-e053-45ad-9b17-83d0e55fba86\") " Dec 03 11:40:51 crc 
kubenswrapper[4756]: I1203 11:40:51.247845 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edbbebac-e053-45ad-9b17-83d0e55fba86-kube-api-access-n7f5w" (OuterVolumeSpecName: "kube-api-access-n7f5w") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "kube-api-access-n7f5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.249335 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.273724 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-inventory" (OuterVolumeSpecName: "inventory") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.277105 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.277611 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.278855 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.279284 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.281607 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.288201 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "edbbebac-e053-45ad-9b17-83d0e55fba86" (UID: "edbbebac-e053-45ad-9b17-83d0e55fba86"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345014 4756 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345057 4756 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345074 4756 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345087 4756 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345102 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7f5w\" (UniqueName: \"kubernetes.io/projected/edbbebac-e053-45ad-9b17-83d0e55fba86-kube-api-access-n7f5w\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 
11:40:51.345115 4756 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345128 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345141 4756 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.345153 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edbbebac-e053-45ad-9b17-83d0e55fba86-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.706348 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" event={"ID":"edbbebac-e053-45ad-9b17-83d0e55fba86","Type":"ContainerDied","Data":"984ed19311e2e3d7f91e3e511e808a1de1f3ad9bf847939806059b3604887d6f"} Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.706696 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984ed19311e2e3d7f91e3e511e808a1de1f3ad9bf847939806059b3604887d6f" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.706526 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jv7s6" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.814816 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh"] Dec 03 11:40:51 crc kubenswrapper[4756]: E1203 11:40:51.815307 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="registry-server" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.815327 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="registry-server" Dec 03 11:40:51 crc kubenswrapper[4756]: E1203 11:40:51.815344 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="extract-utilities" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.815351 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="extract-utilities" Dec 03 11:40:51 crc kubenswrapper[4756]: E1203 11:40:51.815381 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="extract-content" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.815392 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="extract-content" Dec 03 11:40:51 crc kubenswrapper[4756]: E1203 11:40:51.815405 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbbebac-e053-45ad-9b17-83d0e55fba86" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.815412 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbbebac-e053-45ad-9b17-83d0e55fba86" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.815611 4756 
memory_manager.go:354] "RemoveStaleState removing state" podUID="edbbebac-e053-45ad-9b17-83d0e55fba86" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.815628 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="968bcaa6-baeb-4e34-bed9-20d3b5e6f17e" containerName="registry-server" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.816352 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.819207 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.819256 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.819278 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.819486 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.823269 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qfzdt" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.836847 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh"] Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.868884 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: 
\"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.869283 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.869767 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8xq\" (UniqueName: \"kubernetes.io/projected/9dab21b5-7428-46b5-8d98-956b18345f6d-kube-api-access-dq8xq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.870282 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.870561 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 
03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.870934 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.871262 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.974679 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8xq\" (UniqueName: \"kubernetes.io/projected/9dab21b5-7428-46b5-8d98-956b18345f6d-kube-api-access-dq8xq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.974759 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.974824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.974852 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.974938 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.975041 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.975107 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.980261 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.981626 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.982266 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.983065 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.983875 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.985486 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:51 crc kubenswrapper[4756]: I1203 11:40:51.999115 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8xq\" (UniqueName: \"kubernetes.io/projected/9dab21b5-7428-46b5-8d98-956b18345f6d-kube-api-access-dq8xq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-89lxh\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:52 crc kubenswrapper[4756]: I1203 11:40:52.137121 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:40:52 crc kubenswrapper[4756]: I1203 11:40:52.698626 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh"] Dec 03 11:40:53 crc kubenswrapper[4756]: I1203 11:40:53.735876 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" event={"ID":"9dab21b5-7428-46b5-8d98-956b18345f6d","Type":"ContainerStarted","Data":"157c49388c625f5eb7368eec514436f98acea877cfce029b31d858cad512abb7"} Dec 03 11:40:56 crc kubenswrapper[4756]: I1203 11:40:56.769844 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" event={"ID":"9dab21b5-7428-46b5-8d98-956b18345f6d","Type":"ContainerStarted","Data":"9313a8dbd8292a4f0981ad4b53ddd039efcf2c74eafb488f474414b1a575ab32"} Dec 03 11:40:56 crc kubenswrapper[4756]: I1203 11:40:56.805573 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" podStartSLOduration=2.719614567 podStartE2EDuration="5.805535184s" podCreationTimestamp="2025-12-03 11:40:51 +0000 UTC" firstStartedPulling="2025-12-03 11:40:52.71630529 +0000 UTC m=+2863.746306534" lastFinishedPulling="2025-12-03 11:40:55.802225907 +0000 UTC m=+2866.832227151" observedRunningTime="2025-12-03 11:40:56.79014946 +0000 UTC m=+2867.820150714" watchObservedRunningTime="2025-12-03 11:40:56.805535184 +0000 UTC m=+2867.835536438" Dec 03 11:42:22 crc kubenswrapper[4756]: I1203 11:42:22.607727 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:42:22 crc kubenswrapper[4756]: 
I1203 11:42:22.608791 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:42:52 crc kubenswrapper[4756]: I1203 11:42:52.607682 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:42:52 crc kubenswrapper[4756]: I1203 11:42:52.608300 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:43:22 crc kubenswrapper[4756]: I1203 11:43:22.607790 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:43:22 crc kubenswrapper[4756]: I1203 11:43:22.608376 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:43:22 crc kubenswrapper[4756]: I1203 11:43:22.608431 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:43:22 crc kubenswrapper[4756]: I1203 11:43:22.609226 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:43:22 crc kubenswrapper[4756]: I1203 11:43:22.609290 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" gracePeriod=600 Dec 03 11:43:22 crc kubenswrapper[4756]: E1203 11:43:22.728134 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:43:23 crc kubenswrapper[4756]: I1203 11:43:23.444373 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" exitCode=0 Dec 03 11:43:23 crc kubenswrapper[4756]: I1203 11:43:23.444443 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"} Dec 03 11:43:23 crc 
kubenswrapper[4756]: I1203 11:43:23.444539 4756 scope.go:117] "RemoveContainer" containerID="f2d2339d48304c122e5c986eb5f41e55bfa042f513c718620aa2da2e3c4adb10" Dec 03 11:43:23 crc kubenswrapper[4756]: I1203 11:43:23.445664 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:43:23 crc kubenswrapper[4756]: E1203 11:43:23.446061 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:43:33 crc kubenswrapper[4756]: I1203 11:43:33.549874 4756 generic.go:334] "Generic (PLEG): container finished" podID="9dab21b5-7428-46b5-8d98-956b18345f6d" containerID="9313a8dbd8292a4f0981ad4b53ddd039efcf2c74eafb488f474414b1a575ab32" exitCode=0 Dec 03 11:43:33 crc kubenswrapper[4756]: I1203 11:43:33.549987 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" event={"ID":"9dab21b5-7428-46b5-8d98-956b18345f6d","Type":"ContainerDied","Data":"9313a8dbd8292a4f0981ad4b53ddd039efcf2c74eafb488f474414b1a575ab32"} Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.015178 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.163244 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-0\") pod \"9dab21b5-7428-46b5-8d98-956b18345f6d\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.163751 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-inventory\") pod \"9dab21b5-7428-46b5-8d98-956b18345f6d\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.163780 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-telemetry-combined-ca-bundle\") pod \"9dab21b5-7428-46b5-8d98-956b18345f6d\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.163845 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ssh-key\") pod \"9dab21b5-7428-46b5-8d98-956b18345f6d\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.163923 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-2\") pod \"9dab21b5-7428-46b5-8d98-956b18345f6d\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.164085 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-1\") pod \"9dab21b5-7428-46b5-8d98-956b18345f6d\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.164114 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq8xq\" (UniqueName: \"kubernetes.io/projected/9dab21b5-7428-46b5-8d98-956b18345f6d-kube-api-access-dq8xq\") pod \"9dab21b5-7428-46b5-8d98-956b18345f6d\" (UID: \"9dab21b5-7428-46b5-8d98-956b18345f6d\") " Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.172140 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9dab21b5-7428-46b5-8d98-956b18345f6d" (UID: "9dab21b5-7428-46b5-8d98-956b18345f6d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.173391 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dab21b5-7428-46b5-8d98-956b18345f6d-kube-api-access-dq8xq" (OuterVolumeSpecName: "kube-api-access-dq8xq") pod "9dab21b5-7428-46b5-8d98-956b18345f6d" (UID: "9dab21b5-7428-46b5-8d98-956b18345f6d"). InnerVolumeSpecName "kube-api-access-dq8xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.199502 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9dab21b5-7428-46b5-8d98-956b18345f6d" (UID: "9dab21b5-7428-46b5-8d98-956b18345f6d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.201211 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9dab21b5-7428-46b5-8d98-956b18345f6d" (UID: "9dab21b5-7428-46b5-8d98-956b18345f6d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.202460 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-inventory" (OuterVolumeSpecName: "inventory") pod "9dab21b5-7428-46b5-8d98-956b18345f6d" (UID: "9dab21b5-7428-46b5-8d98-956b18345f6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.203676 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9dab21b5-7428-46b5-8d98-956b18345f6d" (UID: "9dab21b5-7428-46b5-8d98-956b18345f6d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.204694 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9dab21b5-7428-46b5-8d98-956b18345f6d" (UID: "9dab21b5-7428-46b5-8d98-956b18345f6d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.269236 4756 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.269273 4756 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.269291 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.269304 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.269319 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.269329 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq8xq\" (UniqueName: \"kubernetes.io/projected/9dab21b5-7428-46b5-8d98-956b18345f6d-kube-api-access-dq8xq\") on node \"crc\" DevicePath \"\"" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.269340 4756 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/9dab21b5-7428-46b5-8d98-956b18345f6d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.582008 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" event={"ID":"9dab21b5-7428-46b5-8d98-956b18345f6d","Type":"ContainerDied","Data":"157c49388c625f5eb7368eec514436f98acea877cfce029b31d858cad512abb7"} Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.582363 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="157c49388c625f5eb7368eec514436f98acea877cfce029b31d858cad512abb7" Dec 03 11:43:35 crc kubenswrapper[4756]: I1203 11:43:35.582147 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-89lxh" Dec 03 11:43:37 crc kubenswrapper[4756]: I1203 11:43:37.234738 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:43:37 crc kubenswrapper[4756]: E1203 11:43:37.236533 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:43:48 crc kubenswrapper[4756]: I1203 11:43:48.234286 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:43:48 crc kubenswrapper[4756]: E1203 11:43:48.235134 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.415733 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ssjdj"] Dec 03 11:43:55 crc kubenswrapper[4756]: E1203 11:43:55.418130 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dab21b5-7428-46b5-8d98-956b18345f6d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.418155 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dab21b5-7428-46b5-8d98-956b18345f6d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.418429 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dab21b5-7428-46b5-8d98-956b18345f6d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.420032 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.449014 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssjdj"] Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.524983 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-utilities\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.525556 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-catalog-content\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.525970 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrctn\" (UniqueName: \"kubernetes.io/projected/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-kube-api-access-rrctn\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.627062 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-catalog-content\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.627551 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rrctn\" (UniqueName: \"kubernetes.io/projected/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-kube-api-access-rrctn\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.627935 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-utilities\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.628750 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-utilities\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.629337 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-catalog-content\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.655609 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrctn\" (UniqueName: \"kubernetes.io/projected/3609b14b-ffbe-45d5-818d-d6a01bf0b5d1-kube-api-access-rrctn\") pod \"certified-operators-ssjdj\" (UID: \"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1\") " pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:55 crc kubenswrapper[4756]: I1203 11:43:55.756599 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:43:56 crc kubenswrapper[4756]: I1203 11:43:56.250417 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssjdj"] Dec 03 11:43:56 crc kubenswrapper[4756]: I1203 11:43:56.820938 4756 generic.go:334] "Generic (PLEG): container finished" podID="3609b14b-ffbe-45d5-818d-d6a01bf0b5d1" containerID="f0e82c7480ff82f42d9d47bf9328d0a194d3972a5281fd031027c32b930950fd" exitCode=0 Dec 03 11:43:56 crc kubenswrapper[4756]: I1203 11:43:56.821035 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssjdj" event={"ID":"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1","Type":"ContainerDied","Data":"f0e82c7480ff82f42d9d47bf9328d0a194d3972a5281fd031027c32b930950fd"} Dec 03 11:43:56 crc kubenswrapper[4756]: I1203 11:43:56.821081 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssjdj" event={"ID":"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1","Type":"ContainerStarted","Data":"c5ca3666bd68eaf46f541df93b4a77c34d5a2ca13998c5d2e7c79f45cc8fd4d7"} Dec 03 11:43:56 crc kubenswrapper[4756]: I1203 11:43:56.823708 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:43:59 crc kubenswrapper[4756]: I1203 11:43:59.258430 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:43:59 crc kubenswrapper[4756]: E1203 11:43:59.259407 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 
11:44:02 crc kubenswrapper[4756]: I1203 11:44:02.892530 4756 generic.go:334] "Generic (PLEG): container finished" podID="3609b14b-ffbe-45d5-818d-d6a01bf0b5d1" containerID="24d1e4a731dbc66d8b5823370ea78c2c4eb7f90538b88e87fd6ca2df001b3c8f" exitCode=0 Dec 03 11:44:02 crc kubenswrapper[4756]: I1203 11:44:02.892607 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssjdj" event={"ID":"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1","Type":"ContainerDied","Data":"24d1e4a731dbc66d8b5823370ea78c2c4eb7f90538b88e87fd6ca2df001b3c8f"} Dec 03 11:44:03 crc kubenswrapper[4756]: I1203 11:44:03.904492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssjdj" event={"ID":"3609b14b-ffbe-45d5-818d-d6a01bf0b5d1","Type":"ContainerStarted","Data":"c6f6f10290973975458697589c9e270650d3e86f4046029ab618b7abb1b2819d"} Dec 03 11:44:03 crc kubenswrapper[4756]: I1203 11:44:03.927310 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ssjdj" podStartSLOduration=2.441346418 podStartE2EDuration="8.927283967s" podCreationTimestamp="2025-12-03 11:43:55 +0000 UTC" firstStartedPulling="2025-12-03 11:43:56.823472299 +0000 UTC m=+3047.853473543" lastFinishedPulling="2025-12-03 11:44:03.309409848 +0000 UTC m=+3054.339411092" observedRunningTime="2025-12-03 11:44:03.924265062 +0000 UTC m=+3054.954266306" watchObservedRunningTime="2025-12-03 11:44:03.927283967 +0000 UTC m=+3054.957285211" Dec 03 11:44:05 crc kubenswrapper[4756]: I1203 11:44:05.759749 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:44:05 crc kubenswrapper[4756]: I1203 11:44:05.760133 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:44:05 crc kubenswrapper[4756]: I1203 11:44:05.818445 4756 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:44:12 crc kubenswrapper[4756]: I1203 11:44:12.234650 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:44:12 crc kubenswrapper[4756]: E1203 11:44:12.235753 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:44:15 crc kubenswrapper[4756]: I1203 11:44:15.804229 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ssjdj" Dec 03 11:44:15 crc kubenswrapper[4756]: I1203 11:44:15.886269 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssjdj"] Dec 03 11:44:15 crc kubenswrapper[4756]: I1203 11:44:15.934371 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrrz6"] Dec 03 11:44:15 crc kubenswrapper[4756]: I1203 11:44:15.934693 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wrrz6" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="registry-server" containerID="cri-o://d646316d762167926c928bdd22c76beaf7bc2f2a99d1b83838aaa99d3e5e1cbf" gracePeriod=2 Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.139548 4756 generic.go:334] "Generic (PLEG): container finished" podID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerID="d646316d762167926c928bdd22c76beaf7bc2f2a99d1b83838aaa99d3e5e1cbf" exitCode=0 Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 
11:44:16.139906 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrz6" event={"ID":"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6","Type":"ContainerDied","Data":"d646316d762167926c928bdd22c76beaf7bc2f2a99d1b83838aaa99d3e5e1cbf"} Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.513739 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.599734 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j88n7\" (UniqueName: \"kubernetes.io/projected/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-kube-api-access-j88n7\") pod \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.600175 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-utilities\") pod \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.600496 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-catalog-content\") pod \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\" (UID: \"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6\") " Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.600597 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-utilities" (OuterVolumeSpecName: "utilities") pod "3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" (UID: "3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.601354 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.616185 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-kube-api-access-j88n7" (OuterVolumeSpecName: "kube-api-access-j88n7") pod "3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" (UID: "3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6"). InnerVolumeSpecName "kube-api-access-j88n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.673809 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" (UID: "3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.702798 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:44:16 crc kubenswrapper[4756]: I1203 11:44:16.702852 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j88n7\" (UniqueName: \"kubernetes.io/projected/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6-kube-api-access-j88n7\") on node \"crc\" DevicePath \"\"" Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.153407 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrrz6" event={"ID":"3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6","Type":"ContainerDied","Data":"e9c655669ead6b4da37d15f5563e8d1f01d866336b0f87a5679e1143dbc4b2c5"} Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.153456 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wrrz6" Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.153490 4756 scope.go:117] "RemoveContainer" containerID="d646316d762167926c928bdd22c76beaf7bc2f2a99d1b83838aaa99d3e5e1cbf" Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.192884 4756 scope.go:117] "RemoveContainer" containerID="3b8919e283b78f96efc4da53c1fa718302640c12746e698f2318564e47685a4b" Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.197533 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrrz6"] Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.209738 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wrrz6"] Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.219189 4756 scope.go:117] "RemoveContainer" containerID="2870012ffb69260337f9ca1c209e7472e486e06eabe74b61ed878661d8dce8ab" Dec 03 11:44:17 crc kubenswrapper[4756]: I1203 11:44:17.244517 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" path="/var/lib/kubelet/pods/3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6/volumes" Dec 03 11:44:25 crc kubenswrapper[4756]: I1203 11:44:25.235272 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:44:25 crc kubenswrapper[4756]: E1203 11:44:25.236268 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.270770 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/tempest-tests-tempest"] Dec 03 11:44:31 crc kubenswrapper[4756]: E1203 11:44:31.271775 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="registry-server" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.271790 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="registry-server" Dec 03 11:44:31 crc kubenswrapper[4756]: E1203 11:44:31.271816 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="extract-utilities" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.271823 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="extract-utilities" Dec 03 11:44:31 crc kubenswrapper[4756]: E1203 11:44:31.271842 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="extract-content" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.271849 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="extract-content" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.272067 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="3517ce5b-ffee-43c6-9f19-ffb86c6b8ac6" containerName="registry-server" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.272819 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.278567 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.278911 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cxqc5" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.279099 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.279381 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.287315 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323040 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323131 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323171 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323239 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323264 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl8t9\" (UniqueName: \"kubernetes.io/projected/f9a7368b-5739-4366-8a70-e33f19837e9a-kube-api-access-sl8t9\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323445 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323638 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.323758 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.324018 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.426052 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.426167 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.426437 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427104 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-temporary\") 
pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427142 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427216 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427233 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl8t9\" (UniqueName: \"kubernetes.io/projected/f9a7368b-5739-4366-8a70-e33f19837e9a-kube-api-access-sl8t9\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427294 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427368 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " 
pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427419 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.427930 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.428577 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.428734 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.430046 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-config-data\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: 
I1203 11:44:31.433223 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.433353 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.436217 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.447788 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl8t9\" (UniqueName: \"kubernetes.io/projected/f9a7368b-5739-4366-8a70-e33f19837e9a-kube-api-access-sl8t9\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.455295 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " pod="openstack/tempest-tests-tempest" Dec 03 11:44:31 crc kubenswrapper[4756]: I1203 11:44:31.604444 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 11:44:32 crc kubenswrapper[4756]: I1203 11:44:32.094391 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 11:44:32 crc kubenswrapper[4756]: I1203 11:44:32.320414 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f9a7368b-5739-4366-8a70-e33f19837e9a","Type":"ContainerStarted","Data":"1f641ac88202bc17388b25ddcb75bc3d1310c21174651dc8d1030eefcea9b1c4"} Dec 03 11:44:40 crc kubenswrapper[4756]: I1203 11:44:40.235228 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:44:40 crc kubenswrapper[4756]: E1203 11:44:40.236642 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:44:55 crc kubenswrapper[4756]: I1203 11:44:55.234085 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:44:55 crc kubenswrapper[4756]: E1203 11:44:55.234733 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.150077 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5"] Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.151825 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.154175 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.154359 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.163220 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5"] Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.233800 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c229e97-2486-402a-ab22-acbba0c64fd0-secret-volume\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.233844 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c229e97-2486-402a-ab22-acbba0c64fd0-config-volume\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.233882 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4g2n\" (UniqueName: 
\"kubernetes.io/projected/2c229e97-2486-402a-ab22-acbba0c64fd0-kube-api-access-g4g2n\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.335922 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c229e97-2486-402a-ab22-acbba0c64fd0-secret-volume\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.335997 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c229e97-2486-402a-ab22-acbba0c64fd0-config-volume\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.336039 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4g2n\" (UniqueName: \"kubernetes.io/projected/2c229e97-2486-402a-ab22-acbba0c64fd0-kube-api-access-g4g2n\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.337975 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c229e97-2486-402a-ab22-acbba0c64fd0-config-volume\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 
11:45:00.345422 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c229e97-2486-402a-ab22-acbba0c64fd0-secret-volume\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.353200 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4g2n\" (UniqueName: \"kubernetes.io/projected/2c229e97-2486-402a-ab22-acbba0c64fd0-kube-api-access-g4g2n\") pod \"collect-profiles-29412705-xcjc5\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:00 crc kubenswrapper[4756]: I1203 11:45:00.481476 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:08 crc kubenswrapper[4756]: I1203 11:45:08.233676 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:45:08 crc kubenswrapper[4756]: E1203 11:45:08.234829 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:45:11 crc kubenswrapper[4756]: E1203 11:45:11.171541 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 03 11:45:11 crc kubenswrapper[4756]: E1203 
11:45:11.172476 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl8t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropag
ation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(f9a7368b-5739-4366-8a70-e33f19837e9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 11:45:11 crc kubenswrapper[4756]: E1203 11:45:11.173711 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="f9a7368b-5739-4366-8a70-e33f19837e9a" Dec 03 11:45:11 crc kubenswrapper[4756]: I1203 11:45:11.610669 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5"] Dec 03 11:45:11 crc kubenswrapper[4756]: I1203 11:45:11.836568 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" event={"ID":"2c229e97-2486-402a-ab22-acbba0c64fd0","Type":"ContainerStarted","Data":"162f3ac06241dbd8c9ff61db2404ed011a1217050449c4a90c71949b5686ae38"} Dec 03 11:45:11 crc kubenswrapper[4756]: E1203 11:45:11.837673 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="f9a7368b-5739-4366-8a70-e33f19837e9a" Dec 03 11:45:12 crc kubenswrapper[4756]: I1203 11:45:12.845930 4756 generic.go:334] "Generic (PLEG): container finished" podID="2c229e97-2486-402a-ab22-acbba0c64fd0" containerID="034d6c8d2b652604ec9a3aac4f62027dde2849b12bf0d87468a121f0c5ca5b61" exitCode=0 Dec 03 11:45:12 crc kubenswrapper[4756]: I1203 11:45:12.846164 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" event={"ID":"2c229e97-2486-402a-ab22-acbba0c64fd0","Type":"ContainerDied","Data":"034d6c8d2b652604ec9a3aac4f62027dde2849b12bf0d87468a121f0c5ca5b61"} Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.240201 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.430195 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c229e97-2486-402a-ab22-acbba0c64fd0-config-volume\") pod \"2c229e97-2486-402a-ab22-acbba0c64fd0\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.430414 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c229e97-2486-402a-ab22-acbba0c64fd0-secret-volume\") pod \"2c229e97-2486-402a-ab22-acbba0c64fd0\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.430463 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4g2n\" (UniqueName: \"kubernetes.io/projected/2c229e97-2486-402a-ab22-acbba0c64fd0-kube-api-access-g4g2n\") pod \"2c229e97-2486-402a-ab22-acbba0c64fd0\" (UID: \"2c229e97-2486-402a-ab22-acbba0c64fd0\") " Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.432373 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c229e97-2486-402a-ab22-acbba0c64fd0-config-volume" (OuterVolumeSpecName: "config-volume") pod "2c229e97-2486-402a-ab22-acbba0c64fd0" (UID: "2c229e97-2486-402a-ab22-acbba0c64fd0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.437475 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c229e97-2486-402a-ab22-acbba0c64fd0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2c229e97-2486-402a-ab22-acbba0c64fd0" (UID: "2c229e97-2486-402a-ab22-acbba0c64fd0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.438867 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c229e97-2486-402a-ab22-acbba0c64fd0-kube-api-access-g4g2n" (OuterVolumeSpecName: "kube-api-access-g4g2n") pod "2c229e97-2486-402a-ab22-acbba0c64fd0" (UID: "2c229e97-2486-402a-ab22-acbba0c64fd0"). InnerVolumeSpecName "kube-api-access-g4g2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.534174 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c229e97-2486-402a-ab22-acbba0c64fd0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.534233 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4g2n\" (UniqueName: \"kubernetes.io/projected/2c229e97-2486-402a-ab22-acbba0c64fd0-kube-api-access-g4g2n\") on node \"crc\" DevicePath \"\"" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.534254 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c229e97-2486-402a-ab22-acbba0c64fd0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.873080 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" event={"ID":"2c229e97-2486-402a-ab22-acbba0c64fd0","Type":"ContainerDied","Data":"162f3ac06241dbd8c9ff61db2404ed011a1217050449c4a90c71949b5686ae38"} Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.873661 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="162f3ac06241dbd8c9ff61db2404ed011a1217050449c4a90c71949b5686ae38" Dec 03 11:45:14 crc kubenswrapper[4756]: I1203 11:45:14.873133 4756 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412705-xcjc5" Dec 03 11:45:15 crc kubenswrapper[4756]: I1203 11:45:15.332348 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff"] Dec 03 11:45:15 crc kubenswrapper[4756]: I1203 11:45:15.342695 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412660-x4cff"] Dec 03 11:45:17 crc kubenswrapper[4756]: I1203 11:45:17.248707 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e110b86a-f193-448b-bd73-1babbc0b175b" path="/var/lib/kubelet/pods/e110b86a-f193-448b-bd73-1babbc0b175b/volumes" Dec 03 11:45:20 crc kubenswrapper[4756]: I1203 11:45:20.235728 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:45:20 crc kubenswrapper[4756]: E1203 11:45:20.236317 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:45:22 crc kubenswrapper[4756]: I1203 11:45:22.731004 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 11:45:23 crc kubenswrapper[4756]: I1203 11:45:23.978052 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f9a7368b-5739-4366-8a70-e33f19837e9a","Type":"ContainerStarted","Data":"31a991c557e5be475e3427c18d1f2576c2ed210de3110c9f94ea830dd2e30cf9"} Dec 03 11:45:24 crc kubenswrapper[4756]: I1203 11:45:24.003812 4756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.388912239 podStartE2EDuration="54.003789101s" podCreationTimestamp="2025-12-03 11:44:30 +0000 UTC" firstStartedPulling="2025-12-03 11:44:32.112773343 +0000 UTC m=+3083.142774587" lastFinishedPulling="2025-12-03 11:45:22.727650205 +0000 UTC m=+3133.757651449" observedRunningTime="2025-12-03 11:45:23.995988486 +0000 UTC m=+3135.025989730" watchObservedRunningTime="2025-12-03 11:45:24.003789101 +0000 UTC m=+3135.033790345" Dec 03 11:45:32 crc kubenswrapper[4756]: I1203 11:45:32.233771 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:45:32 crc kubenswrapper[4756]: E1203 11:45:32.234573 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:45:45 crc kubenswrapper[4756]: I1203 11:45:45.237628 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:45:45 crc kubenswrapper[4756]: E1203 11:45:45.238559 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:45:57 crc kubenswrapper[4756]: I1203 11:45:57.234473 4756 scope.go:117] "RemoveContainer" 
containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:45:57 crc kubenswrapper[4756]: E1203 11:45:57.236318 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:46:09 crc kubenswrapper[4756]: I1203 11:46:09.245789 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:46:09 crc kubenswrapper[4756]: E1203 11:46:09.248342 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:46:11 crc kubenswrapper[4756]: I1203 11:46:11.110323 4756 scope.go:117] "RemoveContainer" containerID="bdbe457fa16f76a6dbbb6c839321592c22d89fb37aa4b11762951bc5da0e2f0c" Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.427002 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mt5xv"] Dec 03 11:46:13 crc kubenswrapper[4756]: E1203 11:46:13.428064 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c229e97-2486-402a-ab22-acbba0c64fd0" containerName="collect-profiles" Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.428086 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c229e97-2486-402a-ab22-acbba0c64fd0" containerName="collect-profiles" Dec 03 
11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.428451 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c229e97-2486-402a-ab22-acbba0c64fd0" containerName="collect-profiles"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.430717 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.438511 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mt5xv"]
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.511602 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6b4g\" (UniqueName: \"kubernetes.io/projected/f45bbafe-70d2-4437-b3c0-51eba0bfe073-kube-api-access-n6b4g\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.511682 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-catalog-content\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.511894 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-utilities\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.613812 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6b4g\" (UniqueName: \"kubernetes.io/projected/f45bbafe-70d2-4437-b3c0-51eba0bfe073-kube-api-access-n6b4g\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.613867 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-catalog-content\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.613934 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-utilities\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.614400 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-catalog-content\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.614457 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-utilities\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.635900 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6b4g\" (UniqueName: \"kubernetes.io/projected/f45bbafe-70d2-4437-b3c0-51eba0bfe073-kube-api-access-n6b4g\") pod \"community-operators-mt5xv\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") " pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:13 crc kubenswrapper[4756]: I1203 11:46:13.758095 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:14 crc kubenswrapper[4756]: I1203 11:46:14.559499 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mt5xv"]
Dec 03 11:46:15 crc kubenswrapper[4756]: I1203 11:46:15.570647 4756 generic.go:334] "Generic (PLEG): container finished" podID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerID="6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296" exitCode=0
Dec 03 11:46:15 crc kubenswrapper[4756]: I1203 11:46:15.570717 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mt5xv" event={"ID":"f45bbafe-70d2-4437-b3c0-51eba0bfe073","Type":"ContainerDied","Data":"6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296"}
Dec 03 11:46:15 crc kubenswrapper[4756]: I1203 11:46:15.571376 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mt5xv" event={"ID":"f45bbafe-70d2-4437-b3c0-51eba0bfe073","Type":"ContainerStarted","Data":"ac15f913cb0b3e9beae141085ae41ec898b8a04e30307bb2464233f92cbbc31f"}
Dec 03 11:46:16 crc kubenswrapper[4756]: I1203 11:46:16.583434 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mt5xv" event={"ID":"f45bbafe-70d2-4437-b3c0-51eba0bfe073","Type":"ContainerStarted","Data":"6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c"}
Dec 03 11:46:17 crc kubenswrapper[4756]: I1203 11:46:17.595334 4756 generic.go:334] "Generic (PLEG): container finished" podID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerID="6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c" exitCode=0
Dec 03 11:46:17 crc kubenswrapper[4756]: I1203 11:46:17.595402 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mt5xv" event={"ID":"f45bbafe-70d2-4437-b3c0-51eba0bfe073","Type":"ContainerDied","Data":"6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c"}
Dec 03 11:46:18 crc kubenswrapper[4756]: I1203 11:46:18.609490 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mt5xv" event={"ID":"f45bbafe-70d2-4437-b3c0-51eba0bfe073","Type":"ContainerStarted","Data":"8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd"}
Dec 03 11:46:18 crc kubenswrapper[4756]: I1203 11:46:18.642557 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mt5xv" podStartSLOduration=2.987436463 podStartE2EDuration="5.642515869s" podCreationTimestamp="2025-12-03 11:46:13 +0000 UTC" firstStartedPulling="2025-12-03 11:46:15.573232729 +0000 UTC m=+3186.603233973" lastFinishedPulling="2025-12-03 11:46:18.228312135 +0000 UTC m=+3189.258313379" observedRunningTime="2025-12-03 11:46:18.634402055 +0000 UTC m=+3189.664403309" watchObservedRunningTime="2025-12-03 11:46:18.642515869 +0000 UTC m=+3189.672517114"
Dec 03 11:46:22 crc kubenswrapper[4756]: I1203 11:46:22.235410 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:46:22 crc kubenswrapper[4756]: E1203 11:46:22.236898 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:46:23 crc kubenswrapper[4756]: I1203 11:46:23.759091 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:23 crc kubenswrapper[4756]: I1203 11:46:23.759497 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:23 crc kubenswrapper[4756]: I1203 11:46:23.826590 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:24 crc kubenswrapper[4756]: I1203 11:46:24.711798 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:24 crc kubenswrapper[4756]: I1203 11:46:24.797929 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mt5xv"]
Dec 03 11:46:26 crc kubenswrapper[4756]: I1203 11:46:26.684810 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mt5xv" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="registry-server" containerID="cri-o://8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd" gracePeriod=2
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.254858 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.273668 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-catalog-content\") pod \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") "
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.273754 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6b4g\" (UniqueName: \"kubernetes.io/projected/f45bbafe-70d2-4437-b3c0-51eba0bfe073-kube-api-access-n6b4g\") pod \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") "
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.273797 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-utilities\") pod \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\" (UID: \"f45bbafe-70d2-4437-b3c0-51eba0bfe073\") "
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.275192 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-utilities" (OuterVolumeSpecName: "utilities") pod "f45bbafe-70d2-4437-b3c0-51eba0bfe073" (UID: "f45bbafe-70d2-4437-b3c0-51eba0bfe073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.285215 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45bbafe-70d2-4437-b3c0-51eba0bfe073-kube-api-access-n6b4g" (OuterVolumeSpecName: "kube-api-access-n6b4g") pod "f45bbafe-70d2-4437-b3c0-51eba0bfe073" (UID: "f45bbafe-70d2-4437-b3c0-51eba0bfe073"). InnerVolumeSpecName "kube-api-access-n6b4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.376273 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6b4g\" (UniqueName: \"kubernetes.io/projected/f45bbafe-70d2-4437-b3c0-51eba0bfe073-kube-api-access-n6b4g\") on node \"crc\" DevicePath \"\""
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.376330 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.700585 4756 generic.go:334] "Generic (PLEG): container finished" podID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerID="8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd" exitCode=0
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.700676 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mt5xv"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.700682 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mt5xv" event={"ID":"f45bbafe-70d2-4437-b3c0-51eba0bfe073","Type":"ContainerDied","Data":"8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd"}
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.700722 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mt5xv" event={"ID":"f45bbafe-70d2-4437-b3c0-51eba0bfe073","Type":"ContainerDied","Data":"ac15f913cb0b3e9beae141085ae41ec898b8a04e30307bb2464233f92cbbc31f"}
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.700745 4756 scope.go:117] "RemoveContainer" containerID="8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.738548 4756 scope.go:117] "RemoveContainer" containerID="6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.779786 4756 scope.go:117] "RemoveContainer" containerID="6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.834462 4756 scope.go:117] "RemoveContainer" containerID="8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd"
Dec 03 11:46:27 crc kubenswrapper[4756]: E1203 11:46:27.835553 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd\": container with ID starting with 8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd not found: ID does not exist" containerID="8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.835634 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd"} err="failed to get container status \"8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd\": rpc error: code = NotFound desc = could not find container \"8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd\": container with ID starting with 8be294b0bc4c4d998cef691994b7d1d19a2737a340144785204066cbff7917fd not found: ID does not exist"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.835686 4756 scope.go:117] "RemoveContainer" containerID="6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c"
Dec 03 11:46:27 crc kubenswrapper[4756]: E1203 11:46:27.836260 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c\": container with ID starting with 6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c not found: ID does not exist" containerID="6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.836304 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c"} err="failed to get container status \"6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c\": rpc error: code = NotFound desc = could not find container \"6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c\": container with ID starting with 6f32e929493b8f2fdf2b66e106900d7065e002976bf240f6946d6eb61661b36c not found: ID does not exist"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.836334 4756 scope.go:117] "RemoveContainer" containerID="6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296"
Dec 03 11:46:27 crc kubenswrapper[4756]: E1203 11:46:27.837003 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296\": container with ID starting with 6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296 not found: ID does not exist" containerID="6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296"
Dec 03 11:46:27 crc kubenswrapper[4756]: I1203 11:46:27.837034 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296"} err="failed to get container status \"6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296\": rpc error: code = NotFound desc = could not find container \"6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296\": container with ID starting with 6c989573365c8b2947f19d15585fd96abfc69b9c761fc7436cc31460fa8bd296 not found: ID does not exist"
Dec 03 11:46:28 crc kubenswrapper[4756]: I1203 11:46:28.098540 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f45bbafe-70d2-4437-b3c0-51eba0bfe073" (UID: "f45bbafe-70d2-4437-b3c0-51eba0bfe073"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:46:28 crc kubenswrapper[4756]: I1203 11:46:28.195316 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45bbafe-70d2-4437-b3c0-51eba0bfe073-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 11:46:28 crc kubenswrapper[4756]: I1203 11:46:28.346785 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mt5xv"]
Dec 03 11:46:28 crc kubenswrapper[4756]: I1203 11:46:28.357024 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mt5xv"]
Dec 03 11:46:29 crc kubenswrapper[4756]: I1203 11:46:29.255253 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" path="/var/lib/kubelet/pods/f45bbafe-70d2-4437-b3c0-51eba0bfe073/volumes"
Dec 03 11:46:37 crc kubenswrapper[4756]: I1203 11:46:37.235095 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:46:37 crc kubenswrapper[4756]: E1203 11:46:37.235826 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:46:51 crc kubenswrapper[4756]: I1203 11:46:51.234196 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:46:51 crc kubenswrapper[4756]: E1203 11:46:51.235044 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:47:02 crc kubenswrapper[4756]: I1203 11:47:02.234360 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:47:02 crc kubenswrapper[4756]: E1203 11:47:02.235351 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:47:13 crc kubenswrapper[4756]: I1203 11:47:13.234558 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:47:13 crc kubenswrapper[4756]: E1203 11:47:13.235623 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:47:28 crc kubenswrapper[4756]: I1203 11:47:28.234165 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:47:28 crc kubenswrapper[4756]: E1203 11:47:28.235143 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:47:42 crc kubenswrapper[4756]: I1203 11:47:42.233599 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:47:42 crc kubenswrapper[4756]: E1203 11:47:42.234309 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:47:55 crc kubenswrapper[4756]: I1203 11:47:55.234440 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:47:55 crc kubenswrapper[4756]: E1203 11:47:55.235216 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:48:08 crc kubenswrapper[4756]: I1203 11:48:08.233992 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:48:08 crc kubenswrapper[4756]: E1203 11:48:08.235195 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f"
Dec 03 11:48:23 crc kubenswrapper[4756]: I1203 11:48:23.234541 4756 scope.go:117] "RemoveContainer" containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b"
Dec 03 11:48:23 crc kubenswrapper[4756]: I1203 11:48:23.847520 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"967a0cac7d8be3a4092add4ec5b29e6d6de433e8993120957f85da8cf8c1c08f"}
Dec 03 11:49:53 crc kubenswrapper[4756]: I1203 11:49:53.911934 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s5gzw"]
Dec 03 11:49:53 crc kubenswrapper[4756]: E1203 11:49:53.913054 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="registry-server"
Dec 03 11:49:53 crc kubenswrapper[4756]: I1203 11:49:53.913071 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="registry-server"
Dec 03 11:49:53 crc kubenswrapper[4756]: E1203 11:49:53.913092 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="extract-utilities"
Dec 03 11:49:53 crc kubenswrapper[4756]: I1203 11:49:53.913101 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="extract-utilities"
Dec 03 11:49:53 crc kubenswrapper[4756]: E1203 11:49:53.913148 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="extract-content"
Dec 03 11:49:53 crc kubenswrapper[4756]: I1203 11:49:53.913158 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="extract-content"
Dec 03 11:49:53 crc kubenswrapper[4756]: I1203 11:49:53.913408 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45bbafe-70d2-4437-b3c0-51eba0bfe073" containerName="registry-server"
Dec 03 11:49:53 crc kubenswrapper[4756]: I1203 11:49:53.915153 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:53 crc kubenswrapper[4756]: I1203 11:49:53.921101 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5gzw"]
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.058021 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-utilities\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.058091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hnm\" (UniqueName: \"kubernetes.io/projected/b7fa8c2c-9c15-49ff-b476-1006416a311f-kube-api-access-h8hnm\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.058154 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-catalog-content\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.160484 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-catalog-content\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.160868 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-utilities\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.160910 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hnm\" (UniqueName: \"kubernetes.io/projected/b7fa8c2c-9c15-49ff-b476-1006416a311f-kube-api-access-h8hnm\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.161080 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-catalog-content\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.161422 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-utilities\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.184095 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hnm\" (UniqueName: \"kubernetes.io/projected/b7fa8c2c-9c15-49ff-b476-1006416a311f-kube-api-access-h8hnm\") pod \"redhat-operators-s5gzw\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") " pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.237415 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:49:54 crc kubenswrapper[4756]: I1203 11:49:54.722459 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5gzw"]
Dec 03 11:49:55 crc kubenswrapper[4756]: I1203 11:49:55.711868 4756 generic.go:334] "Generic (PLEG): container finished" podID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerID="00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55" exitCode=0
Dec 03 11:49:55 crc kubenswrapper[4756]: I1203 11:49:55.711943 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5gzw" event={"ID":"b7fa8c2c-9c15-49ff-b476-1006416a311f","Type":"ContainerDied","Data":"00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55"}
Dec 03 11:49:55 crc kubenswrapper[4756]: I1203 11:49:55.712337 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5gzw" event={"ID":"b7fa8c2c-9c15-49ff-b476-1006416a311f","Type":"ContainerStarted","Data":"8b76b5acacd3d96c1bc798c2234c3bb3f948ce66f6fcd78a8e75263219a128c5"}
Dec 03 11:49:55 crc kubenswrapper[4756]: I1203 11:49:55.714208 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 11:49:56 crc kubenswrapper[4756]: I1203 11:49:56.721618 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5gzw" event={"ID":"b7fa8c2c-9c15-49ff-b476-1006416a311f","Type":"ContainerStarted","Data":"8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9"}
Dec 03 11:49:58 crc kubenswrapper[4756]: I1203 11:49:58.743765 4756 generic.go:334] "Generic (PLEG): container finished" podID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerID="8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9" exitCode=0
Dec 03 11:49:58 crc kubenswrapper[4756]: I1203 11:49:58.743853 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5gzw" event={"ID":"b7fa8c2c-9c15-49ff-b476-1006416a311f","Type":"ContainerDied","Data":"8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9"}
Dec 03 11:49:59 crc kubenswrapper[4756]: I1203 11:49:59.756217 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5gzw" event={"ID":"b7fa8c2c-9c15-49ff-b476-1006416a311f","Type":"ContainerStarted","Data":"e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79"}
Dec 03 11:49:59 crc kubenswrapper[4756]: I1203 11:49:59.778151 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s5gzw" podStartSLOduration=3.31620645 podStartE2EDuration="6.778129258s" podCreationTimestamp="2025-12-03 11:49:53 +0000 UTC" firstStartedPulling="2025-12-03 11:49:55.713904861 +0000 UTC m=+3406.743906105" lastFinishedPulling="2025-12-03 11:49:59.175827669 +0000 UTC m=+3410.205828913" observedRunningTime="2025-12-03 11:49:59.774901067 +0000 UTC m=+3410.804902321" watchObservedRunningTime="2025-12-03 11:49:59.778129258 +0000 UTC m=+3410.808130502"
Dec 03 11:50:04 crc kubenswrapper[4756]: I1203 11:50:04.238516 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:50:04 crc kubenswrapper[4756]: I1203 11:50:04.239118 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:50:05 crc kubenswrapper[4756]: I1203 11:50:05.286434 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s5gzw" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="registry-server" probeResult="failure" output=<
Dec 03 11:50:05 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s
Dec 03 11:50:05 crc kubenswrapper[4756]: >
Dec 03 11:50:14 crc kubenswrapper[4756]: I1203 11:50:14.300288 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:50:14 crc kubenswrapper[4756]: I1203 11:50:14.350197 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:50:14 crc kubenswrapper[4756]: I1203 11:50:14.560888 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5gzw"]
Dec 03 11:50:15 crc kubenswrapper[4756]: I1203 11:50:15.893789 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s5gzw" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="registry-server" containerID="cri-o://e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79" gracePeriod=2
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.415260 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5gzw"
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.594580 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-utilities\") pod \"b7fa8c2c-9c15-49ff-b476-1006416a311f\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") "
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.595111 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-catalog-content\") pod \"b7fa8c2c-9c15-49ff-b476-1006416a311f\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") "
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.595238 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8hnm\" (UniqueName: \"kubernetes.io/projected/b7fa8c2c-9c15-49ff-b476-1006416a311f-kube-api-access-h8hnm\") pod \"b7fa8c2c-9c15-49ff-b476-1006416a311f\" (UID: \"b7fa8c2c-9c15-49ff-b476-1006416a311f\") "
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.595431 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-utilities" (OuterVolumeSpecName: "utilities") pod "b7fa8c2c-9c15-49ff-b476-1006416a311f" (UID: "b7fa8c2c-9c15-49ff-b476-1006416a311f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.595860 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.601821 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fa8c2c-9c15-49ff-b476-1006416a311f-kube-api-access-h8hnm" (OuterVolumeSpecName: "kube-api-access-h8hnm") pod "b7fa8c2c-9c15-49ff-b476-1006416a311f" (UID: "b7fa8c2c-9c15-49ff-b476-1006416a311f"). InnerVolumeSpecName "kube-api-access-h8hnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.697648 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8hnm\" (UniqueName: \"kubernetes.io/projected/b7fa8c2c-9c15-49ff-b476-1006416a311f-kube-api-access-h8hnm\") on node \"crc\" DevicePath \"\""
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.712090 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7fa8c2c-9c15-49ff-b476-1006416a311f" (UID: "b7fa8c2c-9c15-49ff-b476-1006416a311f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.799183 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7fa8c2c-9c15-49ff-b476-1006416a311f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.902890 4756 generic.go:334] "Generic (PLEG): container finished" podID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerID="e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79" exitCode=0
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.902941 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5gzw" event={"ID":"b7fa8c2c-9c15-49ff-b476-1006416a311f","Type":"ContainerDied","Data":"e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79"}
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.903001 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5gzw" event={"ID":"b7fa8c2c-9c15-49ff-b476-1006416a311f","Type":"ContainerDied","Data":"8b76b5acacd3d96c1bc798c2234c3bb3f948ce66f6fcd78a8e75263219a128c5"}
Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.903017 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5gzw" Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.903026 4756 scope.go:117] "RemoveContainer" containerID="e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79" Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.929379 4756 scope.go:117] "RemoveContainer" containerID="8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9" Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.943539 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5gzw"] Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.951803 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s5gzw"] Dec 03 11:50:16 crc kubenswrapper[4756]: I1203 11:50:16.967445 4756 scope.go:117] "RemoveContainer" containerID="00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55" Dec 03 11:50:17 crc kubenswrapper[4756]: I1203 11:50:17.007440 4756 scope.go:117] "RemoveContainer" containerID="e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79" Dec 03 11:50:17 crc kubenswrapper[4756]: E1203 11:50:17.008043 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79\": container with ID starting with e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79 not found: ID does not exist" containerID="e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79" Dec 03 11:50:17 crc kubenswrapper[4756]: I1203 11:50:17.008091 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79"} err="failed to get container status \"e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79\": rpc error: code = NotFound desc = could not find container 
\"e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79\": container with ID starting with e3214ca98e7e57418eb979b5172e220abb9bc34c83280b4353b464770d239c79 not found: ID does not exist" Dec 03 11:50:17 crc kubenswrapper[4756]: I1203 11:50:17.008146 4756 scope.go:117] "RemoveContainer" containerID="8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9" Dec 03 11:50:17 crc kubenswrapper[4756]: E1203 11:50:17.008497 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9\": container with ID starting with 8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9 not found: ID does not exist" containerID="8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9" Dec 03 11:50:17 crc kubenswrapper[4756]: I1203 11:50:17.008696 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9"} err="failed to get container status \"8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9\": rpc error: code = NotFound desc = could not find container \"8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9\": container with ID starting with 8808fb4a83acd7e5b0a3a60ece941769e5d2d53322affcb1a1044034078fe4e9 not found: ID does not exist" Dec 03 11:50:17 crc kubenswrapper[4756]: I1203 11:50:17.008822 4756 scope.go:117] "RemoveContainer" containerID="00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55" Dec 03 11:50:17 crc kubenswrapper[4756]: E1203 11:50:17.009257 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55\": container with ID starting with 00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55 not found: ID does not exist" 
containerID="00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55" Dec 03 11:50:17 crc kubenswrapper[4756]: I1203 11:50:17.009299 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55"} err="failed to get container status \"00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55\": rpc error: code = NotFound desc = could not find container \"00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55\": container with ID starting with 00af556f8b2a0aba42fd5fea6981a554378b76a1c03afeec9d1b92bba9bfbf55 not found: ID does not exist" Dec 03 11:50:17 crc kubenswrapper[4756]: I1203 11:50:17.246403 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" path="/var/lib/kubelet/pods/b7fa8c2c-9c15-49ff-b476-1006416a311f/volumes" Dec 03 11:50:52 crc kubenswrapper[4756]: I1203 11:50:52.607500 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:50:52 crc kubenswrapper[4756]: I1203 11:50:52.608094 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.729769 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kvfnf"] Dec 03 11:51:03 crc kubenswrapper[4756]: E1203 11:51:03.730843 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="extract-utilities" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.730861 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="extract-utilities" Dec 03 11:51:03 crc kubenswrapper[4756]: E1203 11:51:03.730871 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="registry-server" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.730879 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="registry-server" Dec 03 11:51:03 crc kubenswrapper[4756]: E1203 11:51:03.730901 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="extract-content" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.730910 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="extract-content" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.731181 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7fa8c2c-9c15-49ff-b476-1006416a311f" containerName="registry-server" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.736971 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.742418 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvfnf"] Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.924283 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-utilities\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.924400 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2vg7\" (UniqueName: \"kubernetes.io/projected/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-kube-api-access-g2vg7\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:03 crc kubenswrapper[4756]: I1203 11:51:03.924513 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-catalog-content\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.026577 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-catalog-content\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.026724 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-utilities\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.026783 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2vg7\" (UniqueName: \"kubernetes.io/projected/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-kube-api-access-g2vg7\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.027246 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-catalog-content\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.027619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-utilities\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.051365 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2vg7\" (UniqueName: \"kubernetes.io/projected/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-kube-api-access-g2vg7\") pod \"redhat-marketplace-kvfnf\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.057339 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:04 crc kubenswrapper[4756]: I1203 11:51:04.516976 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvfnf"] Dec 03 11:51:05 crc kubenswrapper[4756]: I1203 11:51:05.336008 4756 generic.go:334] "Generic (PLEG): container finished" podID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerID="40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648" exitCode=0 Dec 03 11:51:05 crc kubenswrapper[4756]: I1203 11:51:05.336295 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvfnf" event={"ID":"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7","Type":"ContainerDied","Data":"40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648"} Dec 03 11:51:05 crc kubenswrapper[4756]: I1203 11:51:05.336325 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvfnf" event={"ID":"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7","Type":"ContainerStarted","Data":"92565a9f6933d29cd2a79eb6be971b475b5e937c28658c4f121c38d60cd4fa03"} Dec 03 11:51:06 crc kubenswrapper[4756]: I1203 11:51:06.347212 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvfnf" event={"ID":"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7","Type":"ContainerStarted","Data":"62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe"} Dec 03 11:51:07 crc kubenswrapper[4756]: I1203 11:51:07.358477 4756 generic.go:334] "Generic (PLEG): container finished" podID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerID="62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe" exitCode=0 Dec 03 11:51:07 crc kubenswrapper[4756]: I1203 11:51:07.358587 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvfnf" 
event={"ID":"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7","Type":"ContainerDied","Data":"62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe"} Dec 03 11:51:08 crc kubenswrapper[4756]: I1203 11:51:08.370245 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvfnf" event={"ID":"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7","Type":"ContainerStarted","Data":"535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d"} Dec 03 11:51:08 crc kubenswrapper[4756]: I1203 11:51:08.390196 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kvfnf" podStartSLOduration=2.892679749 podStartE2EDuration="5.390170408s" podCreationTimestamp="2025-12-03 11:51:03 +0000 UTC" firstStartedPulling="2025-12-03 11:51:05.338568622 +0000 UTC m=+3476.368569866" lastFinishedPulling="2025-12-03 11:51:07.836059291 +0000 UTC m=+3478.866060525" observedRunningTime="2025-12-03 11:51:08.389224418 +0000 UTC m=+3479.419225662" watchObservedRunningTime="2025-12-03 11:51:08.390170408 +0000 UTC m=+3479.420171652" Dec 03 11:51:14 crc kubenswrapper[4756]: I1203 11:51:14.057764 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:14 crc kubenswrapper[4756]: I1203 11:51:14.058469 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:14 crc kubenswrapper[4756]: I1203 11:51:14.119763 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:14 crc kubenswrapper[4756]: I1203 11:51:14.496605 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:14 crc kubenswrapper[4756]: I1203 11:51:14.559206 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kvfnf"] Dec 03 11:51:16 crc kubenswrapper[4756]: I1203 11:51:16.455499 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kvfnf" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="registry-server" containerID="cri-o://535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d" gracePeriod=2 Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.083075 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.194913 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2vg7\" (UniqueName: \"kubernetes.io/projected/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-kube-api-access-g2vg7\") pod \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.195027 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-catalog-content\") pod \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.195112 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-utilities\") pod \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\" (UID: \"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7\") " Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.196143 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-utilities" (OuterVolumeSpecName: "utilities") pod "a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" (UID: 
"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.201928 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-kube-api-access-g2vg7" (OuterVolumeSpecName: "kube-api-access-g2vg7") pod "a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" (UID: "a01584e1-fef8-4fd3-ac9f-a030bb8b90d7"). InnerVolumeSpecName "kube-api-access-g2vg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.218235 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" (UID: "a01584e1-fef8-4fd3-ac9f-a030bb8b90d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.297548 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2vg7\" (UniqueName: \"kubernetes.io/projected/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-kube-api-access-g2vg7\") on node \"crc\" DevicePath \"\"" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.297993 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.298125 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.468439 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerID="535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d" exitCode=0 Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.468474 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvfnf" event={"ID":"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7","Type":"ContainerDied","Data":"535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d"} Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.468522 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvfnf" event={"ID":"a01584e1-fef8-4fd3-ac9f-a030bb8b90d7","Type":"ContainerDied","Data":"92565a9f6933d29cd2a79eb6be971b475b5e937c28658c4f121c38d60cd4fa03"} Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.468541 4756 scope.go:117] "RemoveContainer" containerID="535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.468569 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvfnf" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.498040 4756 scope.go:117] "RemoveContainer" containerID="62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.499004 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvfnf"] Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.513279 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvfnf"] Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.519615 4756 scope.go:117] "RemoveContainer" containerID="40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.572322 4756 scope.go:117] "RemoveContainer" containerID="535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d" Dec 03 11:51:17 crc kubenswrapper[4756]: E1203 11:51:17.572810 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d\": container with ID starting with 535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d not found: ID does not exist" containerID="535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.572847 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d"} err="failed to get container status \"535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d\": rpc error: code = NotFound desc = could not find container \"535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d\": container with ID starting with 535adb9ce67a0ba4ec9487ad6554883b3b1a6bd1dedab3db6ee548ed1918ab8d not found: 
ID does not exist" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.572872 4756 scope.go:117] "RemoveContainer" containerID="62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe" Dec 03 11:51:17 crc kubenswrapper[4756]: E1203 11:51:17.573226 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe\": container with ID starting with 62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe not found: ID does not exist" containerID="62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.573283 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe"} err="failed to get container status \"62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe\": rpc error: code = NotFound desc = could not find container \"62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe\": container with ID starting with 62c5686b7c08e574f93096d7f82dab86272ea92345d2d366bff6f35275cdd6fe not found: ID does not exist" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.573360 4756 scope.go:117] "RemoveContainer" containerID="40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648" Dec 03 11:51:17 crc kubenswrapper[4756]: E1203 11:51:17.573746 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648\": container with ID starting with 40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648 not found: ID does not exist" containerID="40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648" Dec 03 11:51:17 crc kubenswrapper[4756]: I1203 11:51:17.573776 4756 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648"} err="failed to get container status \"40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648\": rpc error: code = NotFound desc = could not find container \"40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648\": container with ID starting with 40b5fc9748558885e79aba90183df4849013b234b37d2338ad1b1d71bd164648 not found: ID does not exist" Dec 03 11:51:19 crc kubenswrapper[4756]: I1203 11:51:19.257558 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" path="/var/lib/kubelet/pods/a01584e1-fef8-4fd3-ac9f-a030bb8b90d7/volumes" Dec 03 11:51:22 crc kubenswrapper[4756]: I1203 11:51:22.606918 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:51:22 crc kubenswrapper[4756]: I1203 11:51:22.607019 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.607046 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.608653 4756 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.608787 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.609753 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"967a0cac7d8be3a4092add4ec5b29e6d6de433e8993120957f85da8cf8c1c08f"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.609908 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://967a0cac7d8be3a4092add4ec5b29e6d6de433e8993120957f85da8cf8c1c08f" gracePeriod=600 Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.812056 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="967a0cac7d8be3a4092add4ec5b29e6d6de433e8993120957f85da8cf8c1c08f" exitCode=0 Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.812251 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"967a0cac7d8be3a4092add4ec5b29e6d6de433e8993120957f85da8cf8c1c08f"} Dec 03 11:51:52 crc kubenswrapper[4756]: I1203 11:51:52.812345 4756 scope.go:117] "RemoveContainer" 
containerID="c8cdaaa6270f775383199e0aa177cca7ccf4654525ad37d1be84cef8b749a65b" Dec 03 11:51:53 crc kubenswrapper[4756]: I1203 11:51:53.823630 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066"} Dec 03 11:53:52 crc kubenswrapper[4756]: I1203 11:53:52.607556 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:53:52 crc kubenswrapper[4756]: I1203 11:53:52.608312 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:54:22 crc kubenswrapper[4756]: I1203 11:54:22.607222 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:54:22 crc kubenswrapper[4756]: I1203 11:54:22.607843 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:54:52 crc kubenswrapper[4756]: I1203 11:54:52.607069 4756 patch_prober.go:28] 
interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 11:54:52 crc kubenswrapper[4756]: I1203 11:54:52.607657 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 11:54:52 crc kubenswrapper[4756]: I1203 11:54:52.607718 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 11:54:52 crc kubenswrapper[4756]: I1203 11:54:52.608761 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 11:54:52 crc kubenswrapper[4756]: I1203 11:54:52.608819 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" gracePeriod=600 Dec 03 11:54:52 crc kubenswrapper[4756]: E1203 11:54:52.741425 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:54:53 crc kubenswrapper[4756]: I1203 11:54:53.665510 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" exitCode=0 Dec 03 11:54:53 crc kubenswrapper[4756]: I1203 11:54:53.665555 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066"} Dec 03 11:54:53 crc kubenswrapper[4756]: I1203 11:54:53.665590 4756 scope.go:117] "RemoveContainer" containerID="967a0cac7d8be3a4092add4ec5b29e6d6de433e8993120957f85da8cf8c1c08f" Dec 03 11:54:53 crc kubenswrapper[4756]: I1203 11:54:53.666482 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:54:53 crc kubenswrapper[4756]: E1203 11:54:53.666882 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:55:07 crc kubenswrapper[4756]: I1203 11:55:07.234022 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:55:07 crc kubenswrapper[4756]: E1203 11:55:07.234793 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:55:19 crc kubenswrapper[4756]: I1203 11:55:19.245845 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:55:19 crc kubenswrapper[4756]: E1203 11:55:19.247321 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:55:33 crc kubenswrapper[4756]: I1203 11:55:33.234633 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:55:33 crc kubenswrapper[4756]: E1203 11:55:33.235735 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:55:45 crc kubenswrapper[4756]: I1203 11:55:45.234837 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:55:45 crc kubenswrapper[4756]: E1203 11:55:45.235934 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:55:57 crc kubenswrapper[4756]: I1203 11:55:57.234436 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:55:57 crc kubenswrapper[4756]: E1203 11:55:57.235207 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:56:12 crc kubenswrapper[4756]: I1203 11:56:12.234010 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:56:12 crc kubenswrapper[4756]: E1203 11:56:12.235918 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:56:25 crc kubenswrapper[4756]: I1203 11:56:25.234647 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:56:25 crc kubenswrapper[4756]: E1203 11:56:25.235588 4756 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:56:38 crc kubenswrapper[4756]: I1203 11:56:38.234520 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:56:38 crc kubenswrapper[4756]: E1203 11:56:38.236702 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:56:51 crc kubenswrapper[4756]: I1203 11:56:51.236924 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:56:51 crc kubenswrapper[4756]: E1203 11:56:51.239035 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:57:01 crc kubenswrapper[4756]: E1203 11:57:01.217649 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9a7368b_5739_4366_8a70_e33f19837e9a.slice/crio-conmon-31a991c557e5be475e3427c18d1f2576c2ed210de3110c9f94ea830dd2e30cf9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9a7368b_5739_4366_8a70_e33f19837e9a.slice/crio-31a991c557e5be475e3427c18d1f2576c2ed210de3110c9f94ea830dd2e30cf9.scope\": RecentStats: unable to find data in memory cache]" Dec 03 11:57:01 crc kubenswrapper[4756]: I1203 11:57:01.781734 4756 generic.go:334] "Generic (PLEG): container finished" podID="f9a7368b-5739-4366-8a70-e33f19837e9a" containerID="31a991c557e5be475e3427c18d1f2576c2ed210de3110c9f94ea830dd2e30cf9" exitCode=0 Dec 03 11:57:01 crc kubenswrapper[4756]: I1203 11:57:01.781784 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f9a7368b-5739-4366-8a70-e33f19837e9a","Type":"ContainerDied","Data":"31a991c557e5be475e3427c18d1f2576c2ed210de3110c9f94ea830dd2e30cf9"} Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.235149 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:57:03 crc kubenswrapper[4756]: E1203 11:57:03.235715 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.358119 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.517941 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518099 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ssh-key\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518152 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518244 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-config-data\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518322 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ca-certs\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518380 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-workdir\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518445 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config-secret\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518530 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl8t9\" (UniqueName: \"kubernetes.io/projected/f9a7368b-5739-4366-8a70-e33f19837e9a-kube-api-access-sl8t9\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.518683 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-temporary\") pod \"f9a7368b-5739-4366-8a70-e33f19837e9a\" (UID: \"f9a7368b-5739-4366-8a70-e33f19837e9a\") " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.519057 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-config-data" (OuterVolumeSpecName: "config-data") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.519316 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.519692 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.524897 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.525128 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a7368b-5739-4366-8a70-e33f19837e9a-kube-api-access-sl8t9" (OuterVolumeSpecName: "kube-api-access-sl8t9") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "kube-api-access-sl8t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.525511 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.549076 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.552758 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.562744 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.567947 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f9a7368b-5739-4366-8a70-e33f19837e9a" (UID: "f9a7368b-5739-4366-8a70-e33f19837e9a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621104 4756 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621167 4756 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621185 4756 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621195 4756 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621206 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621217 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl8t9\" 
(UniqueName: \"kubernetes.io/projected/f9a7368b-5739-4366-8a70-e33f19837e9a-kube-api-access-sl8t9\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621226 4756 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f9a7368b-5739-4366-8a70-e33f19837e9a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.621235 4756 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f9a7368b-5739-4366-8a70-e33f19837e9a-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.643333 4756 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.723340 4756 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.801207 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f9a7368b-5739-4366-8a70-e33f19837e9a","Type":"ContainerDied","Data":"1f641ac88202bc17388b25ddcb75bc3d1310c21174651dc8d1030eefcea9b1c4"} Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.801255 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f641ac88202bc17388b25ddcb75bc3d1310c21174651dc8d1030eefcea9b1c4" Dec 03 11:57:03 crc kubenswrapper[4756]: I1203 11:57:03.801269 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.262607 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 11:57:07 crc kubenswrapper[4756]: E1203 11:57:07.263611 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="extract-utilities" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.263627 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="extract-utilities" Dec 03 11:57:07 crc kubenswrapper[4756]: E1203 11:57:07.263654 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="registry-server" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.263660 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="registry-server" Dec 03 11:57:07 crc kubenswrapper[4756]: E1203 11:57:07.263673 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="extract-content" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.263679 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="extract-content" Dec 03 11:57:07 crc kubenswrapper[4756]: E1203 11:57:07.263694 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a7368b-5739-4366-8a70-e33f19837e9a" containerName="tempest-tests-tempest-tests-runner" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.263700 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a7368b-5739-4366-8a70-e33f19837e9a" containerName="tempest-tests-tempest-tests-runner" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.263903 4756 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f9a7368b-5739-4366-8a70-e33f19837e9a" containerName="tempest-tests-tempest-tests-runner" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.263978 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01584e1-fef8-4fd3-ac9f-a030bb8b90d7" containerName="registry-server" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.264646 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.267613 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cxqc5" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.270881 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.400930 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9103e77-3037-4d26-946a-822bdd2ba611\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.401041 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4c6g\" (UniqueName: \"kubernetes.io/projected/b9103e77-3037-4d26-946a-822bdd2ba611-kube-api-access-s4c6g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9103e77-3037-4d26-946a-822bdd2ba611\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.502548 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9103e77-3037-4d26-946a-822bdd2ba611\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.503019 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9103e77-3037-4d26-946a-822bdd2ba611\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.503364 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4c6g\" (UniqueName: \"kubernetes.io/projected/b9103e77-3037-4d26-946a-822bdd2ba611-kube-api-access-s4c6g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9103e77-3037-4d26-946a-822bdd2ba611\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.528652 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4c6g\" (UniqueName: \"kubernetes.io/projected/b9103e77-3037-4d26-946a-822bdd2ba611-kube-api-access-s4c6g\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9103e77-3037-4d26-946a-822bdd2ba611\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 crc kubenswrapper[4756]: I1203 11:57:07.531883 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9103e77-3037-4d26-946a-822bdd2ba611\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:07 
crc kubenswrapper[4756]: I1203 11:57:07.602927 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 11:57:08 crc kubenswrapper[4756]: I1203 11:57:08.137396 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 11:57:08 crc kubenswrapper[4756]: I1203 11:57:08.148545 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 11:57:08 crc kubenswrapper[4756]: I1203 11:57:08.864552 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b9103e77-3037-4d26-946a-822bdd2ba611","Type":"ContainerStarted","Data":"252ddf458bad1547ee2dc7df1390809212381cd0a16274d23346502f948157b2"} Dec 03 11:57:12 crc kubenswrapper[4756]: I1203 11:57:12.901760 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b9103e77-3037-4d26-946a-822bdd2ba611","Type":"ContainerStarted","Data":"7b86b702b256ac44ff46305291c90d4bbec57676d1e8aef1bce9ac7cbb16b721"} Dec 03 11:57:12 crc kubenswrapper[4756]: I1203 11:57:12.922240 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.013219045 podStartE2EDuration="5.922215815s" podCreationTimestamp="2025-12-03 11:57:07 +0000 UTC" firstStartedPulling="2025-12-03 11:57:08.148313636 +0000 UTC m=+3839.178314880" lastFinishedPulling="2025-12-03 11:57:12.057310406 +0000 UTC m=+3843.087311650" observedRunningTime="2025-12-03 11:57:12.917556219 +0000 UTC m=+3843.947557473" watchObservedRunningTime="2025-12-03 11:57:12.922215815 +0000 UTC m=+3843.952217059" Dec 03 11:57:17 crc kubenswrapper[4756]: I1203 11:57:17.234252 4756 scope.go:117] "RemoveContainer" 
containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:57:17 crc kubenswrapper[4756]: E1203 11:57:17.235156 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:57:32 crc kubenswrapper[4756]: I1203 11:57:32.233899 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:57:32 crc kubenswrapper[4756]: E1203 11:57:32.234996 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.134710 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cbqb/must-gather-2wp67"] Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.137147 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.140613 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9cbqb"/"kube-root-ca.crt" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.140870 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9cbqb"/"openshift-service-ca.crt" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.164085 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9cbqb/must-gather-2wp67"] Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.265723 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcl9d\" (UniqueName: \"kubernetes.io/projected/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-kube-api-access-dcl9d\") pod \"must-gather-2wp67\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.266253 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-must-gather-output\") pod \"must-gather-2wp67\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.368228 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-must-gather-output\") pod \"must-gather-2wp67\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.368463 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dcl9d\" (UniqueName: \"kubernetes.io/projected/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-kube-api-access-dcl9d\") pod \"must-gather-2wp67\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.368882 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-must-gather-output\") pod \"must-gather-2wp67\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.388709 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcl9d\" (UniqueName: \"kubernetes.io/projected/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-kube-api-access-dcl9d\") pod \"must-gather-2wp67\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.461124 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 11:57:39 crc kubenswrapper[4756]: I1203 11:57:39.952019 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9cbqb/must-gather-2wp67"] Dec 03 11:57:40 crc kubenswrapper[4756]: I1203 11:57:40.150987 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/must-gather-2wp67" event={"ID":"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0","Type":"ContainerStarted","Data":"b655019f14ec48817b1b772f56a640caddc94b60c42b5b79ae98bb785a960a7e"} Dec 03 11:57:46 crc kubenswrapper[4756]: I1203 11:57:46.233947 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:57:46 crc kubenswrapper[4756]: E1203 11:57:46.234875 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:57:48 crc kubenswrapper[4756]: I1203 11:57:48.230944 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/must-gather-2wp67" event={"ID":"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0","Type":"ContainerStarted","Data":"2f5c1a60a1b8d20e545c60ec0ec356ec58b38a7b361b3a03d7873a526fa419b2"} Dec 03 11:57:49 crc kubenswrapper[4756]: I1203 11:57:49.253486 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/must-gather-2wp67" event={"ID":"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0","Type":"ContainerStarted","Data":"059f1b7f238a49bd49b23a9904b125b7b56102a7a96585964e90a012c679c2b6"} Dec 03 11:57:49 crc kubenswrapper[4756]: I1203 11:57:49.286790 4756 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-9cbqb/must-gather-2wp67" podStartSLOduration=2.752114558 podStartE2EDuration="10.286770088s" podCreationTimestamp="2025-12-03 11:57:39 +0000 UTC" firstStartedPulling="2025-12-03 11:57:39.954353576 +0000 UTC m=+3870.984354820" lastFinishedPulling="2025-12-03 11:57:47.489009106 +0000 UTC m=+3878.519010350" observedRunningTime="2025-12-03 11:57:49.278153607 +0000 UTC m=+3880.308154851" watchObservedRunningTime="2025-12-03 11:57:49.286770088 +0000 UTC m=+3880.316771332" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.477631 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-gmxf7"] Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.479823 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.482146 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9cbqb"/"default-dockercfg-fnknc" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.578561 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-host\") pod \"crc-debug-gmxf7\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.578846 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrkz4\" (UniqueName: \"kubernetes.io/projected/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-kube-api-access-zrkz4\") pod \"crc-debug-gmxf7\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.680207 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-host\") pod \"crc-debug-gmxf7\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.680314 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrkz4\" (UniqueName: \"kubernetes.io/projected/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-kube-api-access-zrkz4\") pod \"crc-debug-gmxf7\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.680380 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-host\") pod \"crc-debug-gmxf7\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.703594 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrkz4\" (UniqueName: \"kubernetes.io/projected/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-kube-api-access-zrkz4\") pod \"crc-debug-gmxf7\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:53 crc kubenswrapper[4756]: I1203 11:57:53.801426 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:57:54 crc kubenswrapper[4756]: I1203 11:57:54.297127 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" event={"ID":"bc146a18-b465-47d2-aa7c-44dd3c13d0d9","Type":"ContainerStarted","Data":"202b8d10a6f144240b9399638cc15af62fc24faf2227479479926d1d5bad6876"} Dec 03 11:57:59 crc kubenswrapper[4756]: I1203 11:57:59.469520 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:57:59 crc kubenswrapper[4756]: E1203 11:57:59.470469 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:58:09 crc kubenswrapper[4756]: I1203 11:58:09.614059 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" event={"ID":"bc146a18-b465-47d2-aa7c-44dd3c13d0d9","Type":"ContainerStarted","Data":"c7049b81ed94164aca4b79d35efdcd8df6678b3db51fbeb7e034886c2804e0aa"} Dec 03 11:58:09 crc kubenswrapper[4756]: I1203 11:58:09.630192 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" podStartSLOduration=1.551219986 podStartE2EDuration="16.63016943s" podCreationTimestamp="2025-12-03 11:57:53 +0000 UTC" firstStartedPulling="2025-12-03 11:57:53.852026083 +0000 UTC m=+3884.882027327" lastFinishedPulling="2025-12-03 11:58:08.930975517 +0000 UTC m=+3899.960976771" observedRunningTime="2025-12-03 11:58:09.626113313 +0000 UTC m=+3900.656114587" watchObservedRunningTime="2025-12-03 11:58:09.63016943 +0000 UTC 
m=+3900.660170664" Dec 03 11:58:12 crc kubenswrapper[4756]: I1203 11:58:12.234198 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:58:12 crc kubenswrapper[4756]: E1203 11:58:12.236819 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:58:25 crc kubenswrapper[4756]: I1203 11:58:25.235192 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:58:25 crc kubenswrapper[4756]: E1203 11:58:25.236069 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:58:37 crc kubenswrapper[4756]: I1203 11:58:37.234756 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:58:37 crc kubenswrapper[4756]: E1203 11:58:37.235620 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:58:48 crc kubenswrapper[4756]: I1203 11:58:48.234114 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:58:48 crc kubenswrapper[4756]: E1203 11:58:48.234893 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:58:58 crc kubenswrapper[4756]: I1203 11:58:58.285731 4756 generic.go:334] "Generic (PLEG): container finished" podID="bc146a18-b465-47d2-aa7c-44dd3c13d0d9" containerID="c7049b81ed94164aca4b79d35efdcd8df6678b3db51fbeb7e034886c2804e0aa" exitCode=0 Dec 03 11:58:58 crc kubenswrapper[4756]: I1203 11:58:58.285842 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" event={"ID":"bc146a18-b465-47d2-aa7c-44dd3c13d0d9","Type":"ContainerDied","Data":"c7049b81ed94164aca4b79d35efdcd8df6678b3db51fbeb7e034886c2804e0aa"} Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.414357 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.447882 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-gmxf7"] Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.457143 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-gmxf7"] Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.558939 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-host\") pod \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.559039 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-host" (OuterVolumeSpecName: "host") pod "bc146a18-b465-47d2-aa7c-44dd3c13d0d9" (UID: "bc146a18-b465-47d2-aa7c-44dd3c13d0d9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.559084 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrkz4\" (UniqueName: \"kubernetes.io/projected/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-kube-api-access-zrkz4\") pod \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\" (UID: \"bc146a18-b465-47d2-aa7c-44dd3c13d0d9\") " Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.560563 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-host\") on node \"crc\" DevicePath \"\"" Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.578662 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-kube-api-access-zrkz4" (OuterVolumeSpecName: "kube-api-access-zrkz4") pod "bc146a18-b465-47d2-aa7c-44dd3c13d0d9" (UID: "bc146a18-b465-47d2-aa7c-44dd3c13d0d9"). InnerVolumeSpecName "kube-api-access-zrkz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:58:59 crc kubenswrapper[4756]: I1203 11:58:59.665098 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrkz4\" (UniqueName: \"kubernetes.io/projected/bc146a18-b465-47d2-aa7c-44dd3c13d0d9-kube-api-access-zrkz4\") on node \"crc\" DevicePath \"\"" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.307137 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202b8d10a6f144240b9399638cc15af62fc24faf2227479479926d1d5bad6876" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.307185 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-gmxf7" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.646543 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-vqm7m"] Dec 03 11:59:00 crc kubenswrapper[4756]: E1203 11:59:00.647515 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc146a18-b465-47d2-aa7c-44dd3c13d0d9" containerName="container-00" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.647552 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc146a18-b465-47d2-aa7c-44dd3c13d0d9" containerName="container-00" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.648161 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc146a18-b465-47d2-aa7c-44dd3c13d0d9" containerName="container-00" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.650878 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.654428 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9cbqb"/"default-dockercfg-fnknc" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.787046 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5thvs\" (UniqueName: \"kubernetes.io/projected/179a12bb-ecc4-43b4-8d2a-b9197af44531-kube-api-access-5thvs\") pod \"crc-debug-vqm7m\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.787537 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/179a12bb-ecc4-43b4-8d2a-b9197af44531-host\") pod \"crc-debug-vqm7m\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " 
pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.889535 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/179a12bb-ecc4-43b4-8d2a-b9197af44531-host\") pod \"crc-debug-vqm7m\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.889669 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5thvs\" (UniqueName: \"kubernetes.io/projected/179a12bb-ecc4-43b4-8d2a-b9197af44531-kube-api-access-5thvs\") pod \"crc-debug-vqm7m\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.889760 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/179a12bb-ecc4-43b4-8d2a-b9197af44531-host\") pod \"crc-debug-vqm7m\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.928265 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5thvs\" (UniqueName: \"kubernetes.io/projected/179a12bb-ecc4-43b4-8d2a-b9197af44531-kube-api-access-5thvs\") pod \"crc-debug-vqm7m\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:00 crc kubenswrapper[4756]: I1203 11:59:00.975934 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:01 crc kubenswrapper[4756]: I1203 11:59:01.255746 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc146a18-b465-47d2-aa7c-44dd3c13d0d9" path="/var/lib/kubelet/pods/bc146a18-b465-47d2-aa7c-44dd3c13d0d9/volumes" Dec 03 11:59:01 crc kubenswrapper[4756]: I1203 11:59:01.323119 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" event={"ID":"179a12bb-ecc4-43b4-8d2a-b9197af44531","Type":"ContainerStarted","Data":"d299682362429c92b3a9f003491a1c8a5d628f3eace159a7d96227f368204fb5"} Dec 03 11:59:02 crc kubenswrapper[4756]: I1203 11:59:02.334695 4756 generic.go:334] "Generic (PLEG): container finished" podID="179a12bb-ecc4-43b4-8d2a-b9197af44531" containerID="c19cef0b7f2383f263a2e129b7c6007e87a21f6150c24bc446685199bf7e0202" exitCode=0 Dec 03 11:59:02 crc kubenswrapper[4756]: I1203 11:59:02.335139 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" event={"ID":"179a12bb-ecc4-43b4-8d2a-b9197af44531","Type":"ContainerDied","Data":"c19cef0b7f2383f263a2e129b7c6007e87a21f6150c24bc446685199bf7e0202"} Dec 03 11:59:02 crc kubenswrapper[4756]: I1203 11:59:02.810901 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-vqm7m"] Dec 03 11:59:02 crc kubenswrapper[4756]: I1203 11:59:02.821284 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-vqm7m"] Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.234126 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:59:03 crc kubenswrapper[4756]: E1203 11:59:03.234404 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.450764 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.538122 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/179a12bb-ecc4-43b4-8d2a-b9197af44531-host\") pod \"179a12bb-ecc4-43b4-8d2a-b9197af44531\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.538324 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5thvs\" (UniqueName: \"kubernetes.io/projected/179a12bb-ecc4-43b4-8d2a-b9197af44531-kube-api-access-5thvs\") pod \"179a12bb-ecc4-43b4-8d2a-b9197af44531\" (UID: \"179a12bb-ecc4-43b4-8d2a-b9197af44531\") " Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.538730 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/179a12bb-ecc4-43b4-8d2a-b9197af44531-host" (OuterVolumeSpecName: "host") pod "179a12bb-ecc4-43b4-8d2a-b9197af44531" (UID: "179a12bb-ecc4-43b4-8d2a-b9197af44531"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.539032 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/179a12bb-ecc4-43b4-8d2a-b9197af44531-host\") on node \"crc\" DevicePath \"\"" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.544735 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179a12bb-ecc4-43b4-8d2a-b9197af44531-kube-api-access-5thvs" (OuterVolumeSpecName: "kube-api-access-5thvs") pod "179a12bb-ecc4-43b4-8d2a-b9197af44531" (UID: "179a12bb-ecc4-43b4-8d2a-b9197af44531"). InnerVolumeSpecName "kube-api-access-5thvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.641143 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5thvs\" (UniqueName: \"kubernetes.io/projected/179a12bb-ecc4-43b4-8d2a-b9197af44531-kube-api-access-5thvs\") on node \"crc\" DevicePath \"\"" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.960330 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-pvb2t"] Dec 03 11:59:03 crc kubenswrapper[4756]: E1203 11:59:03.961015 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a12bb-ecc4-43b4-8d2a-b9197af44531" containerName="container-00" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.961103 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a12bb-ecc4-43b4-8d2a-b9197af44531" containerName="container-00" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.961388 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="179a12bb-ecc4-43b4-8d2a-b9197af44531" containerName="container-00" Dec 03 11:59:03 crc kubenswrapper[4756]: I1203 11:59:03.962100 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.048826 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4jrc\" (UniqueName: \"kubernetes.io/projected/d8f4f101-39cd-469b-b8df-05fca1863626-kube-api-access-v4jrc\") pod \"crc-debug-pvb2t\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.049145 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8f4f101-39cd-469b-b8df-05fca1863626-host\") pod \"crc-debug-pvb2t\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.151426 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4jrc\" (UniqueName: \"kubernetes.io/projected/d8f4f101-39cd-469b-b8df-05fca1863626-kube-api-access-v4jrc\") pod \"crc-debug-pvb2t\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.151595 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8f4f101-39cd-469b-b8df-05fca1863626-host\") pod \"crc-debug-pvb2t\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.151721 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8f4f101-39cd-469b-b8df-05fca1863626-host\") pod \"crc-debug-pvb2t\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc 
kubenswrapper[4756]: I1203 11:59:04.169384 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4jrc\" (UniqueName: \"kubernetes.io/projected/d8f4f101-39cd-469b-b8df-05fca1863626-kube-api-access-v4jrc\") pod \"crc-debug-pvb2t\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.280789 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.358664 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" event={"ID":"d8f4f101-39cd-469b-b8df-05fca1863626","Type":"ContainerStarted","Data":"aac763351d2f8335449f252d1fdcbc3503bcb6fd53c4bb42602a27d3cdb5810f"} Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.359927 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-vqm7m" Dec 03 11:59:04 crc kubenswrapper[4756]: I1203 11:59:04.359929 4756 scope.go:117] "RemoveContainer" containerID="c19cef0b7f2383f263a2e129b7c6007e87a21f6150c24bc446685199bf7e0202" Dec 03 11:59:05 crc kubenswrapper[4756]: I1203 11:59:05.245587 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179a12bb-ecc4-43b4-8d2a-b9197af44531" path="/var/lib/kubelet/pods/179a12bb-ecc4-43b4-8d2a-b9197af44531/volumes" Dec 03 11:59:05 crc kubenswrapper[4756]: I1203 11:59:05.368460 4756 generic.go:334] "Generic (PLEG): container finished" podID="d8f4f101-39cd-469b-b8df-05fca1863626" containerID="89f5df73aaf30926ff019e9a7f26ef3649e5463248a30ac6e8dedc7122bf9612" exitCode=0 Dec 03 11:59:05 crc kubenswrapper[4756]: I1203 11:59:05.368538 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" 
event={"ID":"d8f4f101-39cd-469b-b8df-05fca1863626","Type":"ContainerDied","Data":"89f5df73aaf30926ff019e9a7f26ef3649e5463248a30ac6e8dedc7122bf9612"} Dec 03 11:59:05 crc kubenswrapper[4756]: I1203 11:59:05.448689 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-pvb2t"] Dec 03 11:59:05 crc kubenswrapper[4756]: I1203 11:59:05.459480 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cbqb/crc-debug-pvb2t"] Dec 03 11:59:06 crc kubenswrapper[4756]: I1203 11:59:06.494513 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:06 crc kubenswrapper[4756]: I1203 11:59:06.597186 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4jrc\" (UniqueName: \"kubernetes.io/projected/d8f4f101-39cd-469b-b8df-05fca1863626-kube-api-access-v4jrc\") pod \"d8f4f101-39cd-469b-b8df-05fca1863626\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " Dec 03 11:59:06 crc kubenswrapper[4756]: I1203 11:59:06.597387 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8f4f101-39cd-469b-b8df-05fca1863626-host\") pod \"d8f4f101-39cd-469b-b8df-05fca1863626\" (UID: \"d8f4f101-39cd-469b-b8df-05fca1863626\") " Dec 03 11:59:06 crc kubenswrapper[4756]: I1203 11:59:06.597577 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8f4f101-39cd-469b-b8df-05fca1863626-host" (OuterVolumeSpecName: "host") pod "d8f4f101-39cd-469b-b8df-05fca1863626" (UID: "d8f4f101-39cd-469b-b8df-05fca1863626"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 11:59:06 crc kubenswrapper[4756]: I1203 11:59:06.597927 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d8f4f101-39cd-469b-b8df-05fca1863626-host\") on node \"crc\" DevicePath \"\"" Dec 03 11:59:06 crc kubenswrapper[4756]: I1203 11:59:06.604655 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f4f101-39cd-469b-b8df-05fca1863626-kube-api-access-v4jrc" (OuterVolumeSpecName: "kube-api-access-v4jrc") pod "d8f4f101-39cd-469b-b8df-05fca1863626" (UID: "d8f4f101-39cd-469b-b8df-05fca1863626"). InnerVolumeSpecName "kube-api-access-v4jrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 11:59:06 crc kubenswrapper[4756]: I1203 11:59:06.699673 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4jrc\" (UniqueName: \"kubernetes.io/projected/d8f4f101-39cd-469b-b8df-05fca1863626-kube-api-access-v4jrc\") on node \"crc\" DevicePath \"\"" Dec 03 11:59:07 crc kubenswrapper[4756]: I1203 11:59:07.245258 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f4f101-39cd-469b-b8df-05fca1863626" path="/var/lib/kubelet/pods/d8f4f101-39cd-469b-b8df-05fca1863626/volumes" Dec 03 11:59:07 crc kubenswrapper[4756]: I1203 11:59:07.391297 4756 scope.go:117] "RemoveContainer" containerID="89f5df73aaf30926ff019e9a7f26ef3649e5463248a30ac6e8dedc7122bf9612" Dec 03 11:59:07 crc kubenswrapper[4756]: I1203 11:59:07.391407 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/crc-debug-pvb2t" Dec 03 11:59:18 crc kubenswrapper[4756]: I1203 11:59:18.234884 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:59:18 crc kubenswrapper[4756]: E1203 11:59:18.236120 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:59:23 crc kubenswrapper[4756]: I1203 11:59:23.675609 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75c47c8598-6kchw_09b51d51-7a9c-4b73-a277-c488661e4af0/barbican-api/0.log" Dec 03 11:59:23 crc kubenswrapper[4756]: I1203 11:59:23.750118 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75c47c8598-6kchw_09b51d51-7a9c-4b73-a277-c488661e4af0/barbican-api-log/0.log" Dec 03 11:59:23 crc kubenswrapper[4756]: I1203 11:59:23.932207 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6796bdbbcb-s99wh_ffa571bc-b5f1-4b8f-be29-9eae5f21db25/barbican-keystone-listener/0.log" Dec 03 11:59:23 crc kubenswrapper[4756]: I1203 11:59:23.998280 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6796bdbbcb-s99wh_ffa571bc-b5f1-4b8f-be29-9eae5f21db25/barbican-keystone-listener-log/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.110294 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-774fbcb69f-lqf5k_2fb0446e-9a35-4390-9290-b539e6a8718e/barbican-worker/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.125305 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-774fbcb69f-lqf5k_2fb0446e-9a35-4390-9290-b539e6a8718e/barbican-worker-log/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.227168 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b_5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.337026 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/ceilometer-central-agent/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.439353 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/ceilometer-notification-agent/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.485394 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/proxy-httpd/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.514256 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/sg-core/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.691706 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cada869c-6167-4fd2-b8ad-470d18f09cf4/cinder-api/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.711316 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cada869c-6167-4fd2-b8ad-470d18f09cf4/cinder-api-log/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.895352 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ea68c0aa-0cf1-4437-ad1a-988b44fe4032/cinder-scheduler/0.log" Dec 03 11:59:24 crc kubenswrapper[4756]: I1203 11:59:24.962453 4756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ea68c0aa-0cf1-4437-ad1a-988b44fe4032/probe/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.056262 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-662qm_13546056-cc4c-4f52-81ab-79909380facb/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.246910 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs_db190d36-8d79-4e74-8ba0-898731235d0a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.303337 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rb9d7_cdfe5594-c723-4251-9daa-64c59d20f048/init/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.458573 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rb9d7_cdfe5594-c723-4251-9daa-64c59d20f048/init/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.543505 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rb9d7_cdfe5594-c723-4251-9daa-64c59d20f048/dnsmasq-dns/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.590219 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6jd54_e9caa0fb-3dd9-4219-9657-e61712c336e8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.713253 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b66578e3-f591-49d8-b76a-1fc8f8f0d262/glance-httpd/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.814085 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_b66578e3-f591-49d8-b76a-1fc8f8f0d262/glance-log/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.939531 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_543ab57c-507b-4266-9105-e7e09e254311/glance-httpd/0.log" Dec 03 11:59:25 crc kubenswrapper[4756]: I1203 11:59:25.957431 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_543ab57c-507b-4266-9105-e7e09e254311/glance-log/0.log" Dec 03 11:59:26 crc kubenswrapper[4756]: I1203 11:59:26.148929 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bc647888-tcn4m_00c35a0d-70b4-453d-974a-85b638505280/horizon/1.log" Dec 03 11:59:26 crc kubenswrapper[4756]: I1203 11:59:26.350921 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bc647888-tcn4m_00c35a0d-70b4-453d-974a-85b638505280/horizon/0.log" Dec 03 11:59:26 crc kubenswrapper[4756]: I1203 11:59:26.447695 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs_42d479c1-c36c-4481-a4e1-7e155283859d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:26 crc kubenswrapper[4756]: I1203 11:59:26.580510 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bc647888-tcn4m_00c35a0d-70b4-453d-974a-85b638505280/horizon-log/0.log" Dec 03 11:59:26 crc kubenswrapper[4756]: I1203 11:59:26.612458 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rfcxf_d75f87e1-8371-4657-853a-3ad9c89bbc74/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:27 crc kubenswrapper[4756]: I1203 11:59:27.112579 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f5a21d10-a957-4ce1-b804-b75db51fe53c/kube-state-metrics/0.log" Dec 
03 11:59:27 crc kubenswrapper[4756]: I1203 11:59:27.392866 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5_23845452-bd2b-4841-b275-0adbc22178c1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:27 crc kubenswrapper[4756]: I1203 11:59:27.660176 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c6566fd84-f6lrg_9b1ddcb6-cd3d-438f-a007-a1527ab5be16/keystone-api/0.log" Dec 03 11:59:28 crc kubenswrapper[4756]: I1203 11:59:28.007815 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-745d87f76f-mbrkc_4234afcf-96f0-4340-b5c0-e1aac6c4dacb/neutron-httpd/0.log" Dec 03 11:59:28 crc kubenswrapper[4756]: I1203 11:59:28.041410 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-745d87f76f-mbrkc_4234afcf-96f0-4340-b5c0-e1aac6c4dacb/neutron-api/0.log" Dec 03 11:59:28 crc kubenswrapper[4756]: I1203 11:59:28.067021 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7_e51da37d-c82b-43d5-b61c-5199ca9321d2/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:28 crc kubenswrapper[4756]: I1203 11:59:28.693289 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9c862336-6a53-419e-aaf8-f64358150259/nova-cell0-conductor-conductor/0.log" Dec 03 11:59:28 crc kubenswrapper[4756]: I1203 11:59:28.761560 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9c016bc8-281b-4dcc-9475-9c373175f026/nova-api-log/0.log" Dec 03 11:59:28 crc kubenswrapper[4756]: I1203 11:59:28.888597 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9c016bc8-281b-4dcc-9475-9c373175f026/nova-api-api/0.log" Dec 03 11:59:29 crc kubenswrapper[4756]: I1203 11:59:29.240421 4756 scope.go:117] "RemoveContainer" 
containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:59:29 crc kubenswrapper[4756]: E1203 11:59:29.240789 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:59:29 crc kubenswrapper[4756]: I1203 11:59:29.542080 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7945f798-6dd7-4887-9cb3-72852427cf8e/nova-cell1-conductor-conductor/0.log" Dec 03 11:59:29 crc kubenswrapper[4756]: I1203 11:59:29.575649 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4914547a-e3af-4ae6-8f6b-a210ed169dfd/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 11:59:29 crc kubenswrapper[4756]: I1203 11:59:29.796357 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jv7s6_edbbebac-e053-45ad-9b17-83d0e55fba86/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:29 crc kubenswrapper[4756]: I1203 11:59:29.963547 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aae26896-15a3-48e5-b96a-136209092056/nova-metadata-log/0.log" Dec 03 11:59:30 crc kubenswrapper[4756]: I1203 11:59:30.221765 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_50ce8020-3f9b-4085-9f66-c05f682cde05/nova-scheduler-scheduler/0.log" Dec 03 11:59:30 crc kubenswrapper[4756]: I1203 11:59:30.338066 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25a8024d-1033-41f9-a53f-6c5119388b40/mysql-bootstrap/0.log" Dec 03 11:59:30 crc 
kubenswrapper[4756]: I1203 11:59:30.531207 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25a8024d-1033-41f9-a53f-6c5119388b40/mysql-bootstrap/0.log" Dec 03 11:59:30 crc kubenswrapper[4756]: I1203 11:59:30.554808 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25a8024d-1033-41f9-a53f-6c5119388b40/galera/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 11:59:31.128158 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6980bf2-fd5f-4cb1-b148-414229444006/mysql-bootstrap/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 11:59:31.229035 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aae26896-15a3-48e5-b96a-136209092056/nova-metadata-metadata/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 11:59:31.326390 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6980bf2-fd5f-4cb1-b148-414229444006/galera/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 11:59:31.402325 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6980bf2-fd5f-4cb1-b148-414229444006/mysql-bootstrap/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 11:59:31.503308 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8907533b-6dc9-48f9-8938-7089e2c0cbf5/openstackclient/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 11:59:31.648837 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9vc6f_7b8a2775-a311-44e5-80da-356fcba8da63/openstack-network-exporter/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 11:59:31.765548 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovsdb-server-init/0.log" Dec 03 11:59:31 crc kubenswrapper[4756]: I1203 
11:59:31.968281 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovs-vswitchd/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.004913 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovsdb-server-init/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.061900 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovsdb-server/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.250865 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xlz9h_e033887b-a32e-4141-9812-455b70f85d39/ovn-controller/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.348913 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zrq6q_33ee231c-20f0-429c-92a2-7001e843e8b3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.446242 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0976760e-f227-4d4e-a8a3-ed0ac129702c/openstack-network-exporter/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.514629 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0976760e-f227-4d4e-a8a3-ed0ac129702c/ovn-northd/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.655195 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e429c7e2-748f-4231-902c-00290ebe9eb9/openstack-network-exporter/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.688046 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e429c7e2-748f-4231-902c-00290ebe9eb9/ovsdbserver-nb/0.log" Dec 03 11:59:32 crc 
kubenswrapper[4756]: I1203 11:59:32.931527 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_77eb8ce5-5779-43bf-a57b-7ace73542f58/openstack-network-exporter/0.log" Dec 03 11:59:32 crc kubenswrapper[4756]: I1203 11:59:32.938707 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_77eb8ce5-5779-43bf-a57b-7ace73542f58/ovsdbserver-sb/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.117763 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d57c9944-4rhnd_10ef169a-f6d1-4d7e-9ff1-8cca85adce2b/placement-api/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.153102 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1bfa9eab-e774-49f9-b1f6-f2afba51c9ae/setup-container/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.205727 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d57c9944-4rhnd_10ef169a-f6d1-4d7e-9ff1-8cca85adce2b/placement-log/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.525743 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1bfa9eab-e774-49f9-b1f6-f2afba51c9ae/setup-container/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.556987 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a347e9e2-376b-44ac-92de-25736c30ec1e/setup-container/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.557910 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1bfa9eab-e774-49f9-b1f6-f2afba51c9ae/rabbitmq/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.772446 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a347e9e2-376b-44ac-92de-25736c30ec1e/setup-container/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 
11:59:33.881931 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj_8e9f7a37-9bb0-4f3e-bdfd-962164857651/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:33 crc kubenswrapper[4756]: I1203 11:59:33.917650 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a347e9e2-376b-44ac-92de-25736c30ec1e/rabbitmq/0.log" Dec 03 11:59:34 crc kubenswrapper[4756]: I1203 11:59:34.099973 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xksxt_88321f96-08ea-4c4d-9665-8530b28e1a66/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:34 crc kubenswrapper[4756]: I1203 11:59:34.182742 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-klk22_1aab3068-a578-4f78-8326-53aeb4dd74bf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:34 crc kubenswrapper[4756]: I1203 11:59:34.332030 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-wrchj_6252d786-f230-4eed-bc78-b688e07c12e7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:34 crc kubenswrapper[4756]: I1203 11:59:34.522294 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-rmcgx_5b63b8f2-0259-45ce-b2c3-18b043ee2fab/ssh-known-hosts-edpm-deployment/0.log" Dec 03 11:59:34 crc kubenswrapper[4756]: I1203 11:59:34.722395 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cdcf55d99-k7bmw_2f962509-8bee-4b75-a51f-f517ffa88908/proxy-server/0.log" Dec 03 11:59:34 crc kubenswrapper[4756]: I1203 11:59:34.775921 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cdcf55d99-k7bmw_2f962509-8bee-4b75-a51f-f517ffa88908/proxy-httpd/0.log" Dec 03 11:59:34 crc 
kubenswrapper[4756]: I1203 11:59:34.904824 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zbzgq_5cfbb2e2-672e-48fb-8916-ccb83e962bf3/swift-ring-rebalance/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.044487 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-auditor/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.122105 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-reaper/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.192450 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-replicator/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.286326 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-server/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.353796 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-auditor/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.450711 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-replicator/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.521630 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-server/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.541417 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-updater/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.599624 4756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-auditor/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.709130 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-expirer/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.763582 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-replicator/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.778710 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-server/0.log" Dec 03 11:59:35 crc kubenswrapper[4756]: I1203 11:59:35.908637 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-updater/0.log" Dec 03 11:59:36 crc kubenswrapper[4756]: I1203 11:59:36.010153 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/rsync/0.log" Dec 03 11:59:36 crc kubenswrapper[4756]: I1203 11:59:36.023708 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/swift-recon-cron/0.log" Dec 03 11:59:36 crc kubenswrapper[4756]: I1203 11:59:36.196264 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-89lxh_9dab21b5-7428-46b5-8d98-956b18345f6d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:36 crc kubenswrapper[4756]: I1203 11:59:36.389104 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f9a7368b-5739-4366-8a70-e33f19837e9a/tempest-tests-tempest-tests-runner/0.log" Dec 03 11:59:36 crc kubenswrapper[4756]: I1203 11:59:36.462851 4756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b9103e77-3037-4d26-946a-822bdd2ba611/test-operator-logs-container/0.log" Dec 03 11:59:36 crc kubenswrapper[4756]: I1203 11:59:36.625281 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk_cc23e11e-fc64-4cce-8ab5-4e63f64ccb11/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 11:59:42 crc kubenswrapper[4756]: I1203 11:59:42.234646 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:59:42 crc kubenswrapper[4756]: E1203 11:59:42.235378 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 11:59:45 crc kubenswrapper[4756]: I1203 11:59:45.147646 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_81dedd61-3ae8-42b1-8af2-20fe40b22eb7/memcached/0.log" Dec 03 11:59:57 crc kubenswrapper[4756]: I1203 11:59:57.234047 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 11:59:57 crc kubenswrapper[4756]: I1203 11:59:57.889220 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"228e33b676c4abc082048d97b3f25a6d75245a7a0ec40b23df0b7da2dce8ecca"} Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.176033 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs"] Dec 03 12:00:00 crc kubenswrapper[4756]: E1203 12:00:00.177108 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f4f101-39cd-469b-b8df-05fca1863626" containerName="container-00" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.177124 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f4f101-39cd-469b-b8df-05fca1863626" containerName="container-00" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.177348 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f4f101-39cd-469b-b8df-05fca1863626" containerName="container-00" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.178206 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.180782 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.181084 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.205288 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs"] Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.249267 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c9889b2-0d83-4a74-881c-13122aeadd96-secret-volume\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.249362 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c9889b2-0d83-4a74-881c-13122aeadd96-config-volume\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.249586 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwd9\" (UniqueName: \"kubernetes.io/projected/8c9889b2-0d83-4a74-881c-13122aeadd96-kube-api-access-fjwd9\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.351765 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c9889b2-0d83-4a74-881c-13122aeadd96-secret-volume\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.351844 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c9889b2-0d83-4a74-881c-13122aeadd96-config-volume\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.352910 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c9889b2-0d83-4a74-881c-13122aeadd96-config-volume\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.354252 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjwd9\" (UniqueName: \"kubernetes.io/projected/8c9889b2-0d83-4a74-881c-13122aeadd96-kube-api-access-fjwd9\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.367471 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c9889b2-0d83-4a74-881c-13122aeadd96-secret-volume\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.377660 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjwd9\" (UniqueName: \"kubernetes.io/projected/8c9889b2-0d83-4a74-881c-13122aeadd96-kube-api-access-fjwd9\") pod \"collect-profiles-29412720-2fwhs\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:00 crc kubenswrapper[4756]: I1203 12:00:00.510857 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:01 crc kubenswrapper[4756]: I1203 12:00:01.024846 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs"] Dec 03 12:00:01 crc kubenswrapper[4756]: W1203 12:00:01.035253 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c9889b2_0d83_4a74_881c_13122aeadd96.slice/crio-b49caecbdbfd457f48af9fc003f039719f3bc41914e9c6d0e1d3f3bce2c26705 WatchSource:0}: Error finding container b49caecbdbfd457f48af9fc003f039719f3bc41914e9c6d0e1d3f3bce2c26705: Status 404 returned error can't find the container with id b49caecbdbfd457f48af9fc003f039719f3bc41914e9c6d0e1d3f3bce2c26705 Dec 03 12:00:01 crc kubenswrapper[4756]: I1203 12:00:01.930598 4756 generic.go:334] "Generic (PLEG): container finished" podID="8c9889b2-0d83-4a74-881c-13122aeadd96" containerID="8bea3da4073ffc2f6a04b3eec41d2f41862be12095218fd27bb6e43c1db53604" exitCode=0 Dec 03 12:00:01 crc kubenswrapper[4756]: I1203 12:00:01.930725 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" event={"ID":"8c9889b2-0d83-4a74-881c-13122aeadd96","Type":"ContainerDied","Data":"8bea3da4073ffc2f6a04b3eec41d2f41862be12095218fd27bb6e43c1db53604"} Dec 03 12:00:01 crc kubenswrapper[4756]: I1203 12:00:01.931154 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" event={"ID":"8c9889b2-0d83-4a74-881c-13122aeadd96","Type":"ContainerStarted","Data":"b49caecbdbfd457f48af9fc003f039719f3bc41914e9c6d0e1d3f3bce2c26705"} Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.309849 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.431573 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c9889b2-0d83-4a74-881c-13122aeadd96-config-volume\") pod \"8c9889b2-0d83-4a74-881c-13122aeadd96\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.431652 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c9889b2-0d83-4a74-881c-13122aeadd96-secret-volume\") pod \"8c9889b2-0d83-4a74-881c-13122aeadd96\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.431695 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjwd9\" (UniqueName: \"kubernetes.io/projected/8c9889b2-0d83-4a74-881c-13122aeadd96-kube-api-access-fjwd9\") pod \"8c9889b2-0d83-4a74-881c-13122aeadd96\" (UID: \"8c9889b2-0d83-4a74-881c-13122aeadd96\") " Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.432886 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c9889b2-0d83-4a74-881c-13122aeadd96-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c9889b2-0d83-4a74-881c-13122aeadd96" (UID: "8c9889b2-0d83-4a74-881c-13122aeadd96"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.533943 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c9889b2-0d83-4a74-881c-13122aeadd96-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.791441 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c9889b2-0d83-4a74-881c-13122aeadd96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c9889b2-0d83-4a74-881c-13122aeadd96" (UID: "8c9889b2-0d83-4a74-881c-13122aeadd96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.791772 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c9889b2-0d83-4a74-881c-13122aeadd96-kube-api-access-fjwd9" (OuterVolumeSpecName: "kube-api-access-fjwd9") pod "8c9889b2-0d83-4a74-881c-13122aeadd96" (UID: "8c9889b2-0d83-4a74-881c-13122aeadd96"). InnerVolumeSpecName "kube-api-access-fjwd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.841770 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c9889b2-0d83-4a74-881c-13122aeadd96-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.841833 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjwd9\" (UniqueName: \"kubernetes.io/projected/8c9889b2-0d83-4a74-881c-13122aeadd96-kube-api-access-fjwd9\") on node \"crc\" DevicePath \"\"" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.953305 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" event={"ID":"8c9889b2-0d83-4a74-881c-13122aeadd96","Type":"ContainerDied","Data":"b49caecbdbfd457f48af9fc003f039719f3bc41914e9c6d0e1d3f3bce2c26705"} Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.953365 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b49caecbdbfd457f48af9fc003f039719f3bc41914e9c6d0e1d3f3bce2c26705" Dec 03 12:00:03 crc kubenswrapper[4756]: I1203 12:00:03.953368 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412720-2fwhs" Dec 03 12:00:04 crc kubenswrapper[4756]: I1203 12:00:04.399552 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6"] Dec 03 12:00:04 crc kubenswrapper[4756]: I1203 12:00:04.407644 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412675-j86q6"] Dec 03 12:00:05 crc kubenswrapper[4756]: I1203 12:00:05.245863 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4a245b-e6cc-418e-9fbc-6270c50fb523" path="/var/lib/kubelet/pods/0a4a245b-e6cc-418e-9fbc-6270c50fb523/volumes" Dec 03 12:00:05 crc kubenswrapper[4756]: I1203 12:00:05.950862 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/util/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.177131 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/util/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.177527 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/pull/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.181611 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/pull/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.358417 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/pull/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.384077 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/extract/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.389512 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/util/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.583932 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cg2g8_02ff397c-db50-4b3b-be2c-b43dcd1c71db/kube-rbac-proxy/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.662834 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-vqlmr_22eded41-fc81-4a9c-b831-8cfb8d339258/kube-rbac-proxy/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.738341 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cg2g8_02ff397c-db50-4b3b-be2c-b43dcd1c71db/manager/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.844635 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-vqlmr_22eded41-fc81-4a9c-b831-8cfb8d339258/manager/0.log" Dec 03 12:00:06 crc kubenswrapper[4756]: I1203 12:00:06.898634 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-62955_f651f26f-55f4-47cb-a318-0e2a9512f194/kube-rbac-proxy/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: 
I1203 12:00:07.024020 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-62955_f651f26f-55f4-47cb-a318-0e2a9512f194/manager/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: I1203 12:00:07.132641 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q5klh_77626dde-3586-4b33-b4a4-326bab5bfe19/kube-rbac-proxy/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: I1203 12:00:07.153641 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q5klh_77626dde-3586-4b33-b4a4-326bab5bfe19/manager/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: I1203 12:00:07.345347 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gqmt7_0f3ba133-8575-4603-ad87-77502244b892/kube-rbac-proxy/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: I1203 12:00:07.400685 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gqmt7_0f3ba133-8575-4603-ad87-77502244b892/manager/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: I1203 12:00:07.512598 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-crd8q_d605c327-c0d8-4466-b135-a1c8c777b91c/kube-rbac-proxy/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: I1203 12:00:07.589080 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-crd8q_d605c327-c0d8-4466-b135-a1c8c777b91c/manager/0.log" Dec 03 12:00:07 crc kubenswrapper[4756]: I1203 12:00:07.753389 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hpd8p_0df6dfa3-00de-4da1-a132-358b5f6a66e9/kube-rbac-proxy/0.log" 
Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.012857 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-hc7n7_786fcded-8103-4691-b8a4-fa6ef5b79ee6/kube-rbac-proxy/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.042075 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-hc7n7_786fcded-8103-4691-b8a4-fa6ef5b79ee6/manager/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.184489 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hpd8p_0df6dfa3-00de-4da1-a132-358b5f6a66e9/manager/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.195808 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bgw6p_ed53f929-2cac-4053-8875-ad53414156c1/kube-rbac-proxy/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.395465 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bgw6p_ed53f929-2cac-4053-8875-ad53414156c1/manager/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.401508 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-dk746_419295e1-b487-4701-9fc8-0273a49277dc/kube-rbac-proxy/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.448006 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-dk746_419295e1-b487-4701-9fc8-0273a49277dc/manager/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.618081 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2kq6w_0606b8cc-309a-4759-82d9-989ef224169b/manager/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.628238 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2kq6w_0606b8cc-309a-4759-82d9-989ef224169b/kube-rbac-proxy/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.863717 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-xdlgn_ef02c569-7cc5-43a3-a4e9-d8c97cc07465/kube-rbac-proxy/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.953571 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-cxgxw_53d098b0-4833-4b74-b13d-57a4d9c5ee13/kube-rbac-proxy/0.log" Dec 03 12:00:08 crc kubenswrapper[4756]: I1203 12:00:08.967278 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-xdlgn_ef02c569-7cc5-43a3-a4e9-d8c97cc07465/manager/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.118627 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-cxgxw_53d098b0-4833-4b74-b13d-57a4d9c5ee13/manager/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.154164 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-wlfff_52cfac1b-a56f-4189-b9f0-5c4a8acf2069/manager/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.216410 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-wlfff_52cfac1b-a56f-4189-b9f0-5c4a8acf2069/kube-rbac-proxy/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.356454 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4_02b366f7-138d-4a78-9772-8e22db219753/kube-rbac-proxy/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.380287 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4_02b366f7-138d-4a78-9772-8e22db219753/manager/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.734141 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-779dc79ddf-bn5rj_3ae84629-3d85-49fb-a3d9-93c766c1be75/operator/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.805329 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5r2k8_1b7d149f-f0e8-4a81-b7c6-e581a6f12d06/registry-server/0.log" Dec 03 12:00:09 crc kubenswrapper[4756]: I1203 12:00:09.934488 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-bbqwn_ce02ca0e-abd9-4a57-a68d-7d35f304f8fa/kube-rbac-proxy/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.067900 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9r6sc_ded35123-da5e-4a26-9caf-61d6c9d920cd/kube-rbac-proxy/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.099517 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-bbqwn_ce02ca0e-abd9-4a57-a68d-7d35f304f8fa/manager/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.222941 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9r6sc_ded35123-da5e-4a26-9caf-61d6c9d920cd/manager/0.log" Dec 03 12:00:10 
crc kubenswrapper[4756]: I1203 12:00:10.303586 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fkx8p_0b3df789-34ca-4ab2-8fb7-a8fee4df46a7/operator/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.452501 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-v4kqz_e8df208a-e66b-4532-bb65-8c673f2659bc/kube-rbac-proxy/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.533980 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-v4kqz_e8df208a-e66b-4532-bb65-8c673f2659bc/manager/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.681568 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-9t6v5_6c2df820-90b9-48fe-8dd0-8731028d0dbd/kube-rbac-proxy/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.897199 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-srv8g_a5dd2be8-1335-43f1-83af-2a0efabcce1e/kube-rbac-proxy/0.log" Dec 03 12:00:10 crc kubenswrapper[4756]: I1203 12:00:10.905159 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-srv8g_a5dd2be8-1335-43f1-83af-2a0efabcce1e/manager/0.log" Dec 03 12:00:11 crc kubenswrapper[4756]: I1203 12:00:11.140165 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-g6zjw_77043fde-1e4d-4590-b97c-3de89953581a/kube-rbac-proxy/0.log" Dec 03 12:00:11 crc kubenswrapper[4756]: I1203 12:00:11.151413 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-g6zjw_77043fde-1e4d-4590-b97c-3de89953581a/manager/0.log" Dec 03 12:00:11 crc kubenswrapper[4756]: I1203 12:00:11.169813 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c5c989645-dsrph_d88ee2e4-3954-487f-991e-b0f3e66b176a/manager/0.log" Dec 03 12:00:11 crc kubenswrapper[4756]: I1203 12:00:11.474244 4756 scope.go:117] "RemoveContainer" containerID="1e4521f4a388aff990ad75d51e2328e9aae105ed9f3f21397f9e99c62c1b741e" Dec 03 12:00:12 crc kubenswrapper[4756]: I1203 12:00:12.472820 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-9t6v5_6c2df820-90b9-48fe-8dd0-8731028d0dbd/manager/0.log" Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.677450 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kpxd4_88056822-ddb3-47aa-b15e-f344471f6b0a/control-plane-machine-set-operator/0.log" Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.833687 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jgrd6"] Dec 03 12:00:34 crc kubenswrapper[4756]: E1203 12:00:34.834253 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c9889b2-0d83-4a74-881c-13122aeadd96" containerName="collect-profiles" Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.834279 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c9889b2-0d83-4a74-881c-13122aeadd96" containerName="collect-profiles" Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.834583 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c9889b2-0d83-4a74-881c-13122aeadd96" containerName="collect-profiles" Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.836490 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.845372 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgrd6"] Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.913892 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gjjx2_35183c23-2ddd-4984-8ba9-d86765b138ce/kube-rbac-proxy/0.log" Dec 03 12:00:34 crc kubenswrapper[4756]: I1203 12:00:34.932542 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gjjx2_35183c23-2ddd-4984-8ba9-d86765b138ce/machine-api-operator/0.log" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.014117 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-utilities\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.014393 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-catalog-content\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.014431 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwn9\" (UniqueName: \"kubernetes.io/projected/10a22190-856a-4e64-9fb3-04ffa8589b1f-kube-api-access-kjwn9\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc 
kubenswrapper[4756]: I1203 12:00:35.116199 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-utilities\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.116339 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-catalog-content\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.116378 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjwn9\" (UniqueName: \"kubernetes.io/projected/10a22190-856a-4e64-9fb3-04ffa8589b1f-kube-api-access-kjwn9\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.116762 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-utilities\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.117220 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-catalog-content\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.151721 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjwn9\" (UniqueName: \"kubernetes.io/projected/10a22190-856a-4e64-9fb3-04ffa8589b1f-kube-api-access-kjwn9\") pod \"redhat-operators-jgrd6\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.156263 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:35 crc kubenswrapper[4756]: I1203 12:00:35.655571 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgrd6"] Dec 03 12:00:36 crc kubenswrapper[4756]: I1203 12:00:36.243849 4756 generic.go:334] "Generic (PLEG): container finished" podID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerID="09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724" exitCode=0 Dec 03 12:00:36 crc kubenswrapper[4756]: I1203 12:00:36.244056 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgrd6" event={"ID":"10a22190-856a-4e64-9fb3-04ffa8589b1f","Type":"ContainerDied","Data":"09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724"} Dec 03 12:00:36 crc kubenswrapper[4756]: I1203 12:00:36.244215 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgrd6" event={"ID":"10a22190-856a-4e64-9fb3-04ffa8589b1f","Type":"ContainerStarted","Data":"747e06cb6d98067cc3d34e741c15caecdee218e97a62b0e780aad107992faf9a"} Dec 03 12:00:39 crc kubenswrapper[4756]: I1203 12:00:39.399752 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgrd6" event={"ID":"10a22190-856a-4e64-9fb3-04ffa8589b1f","Type":"ContainerStarted","Data":"b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21"} Dec 03 12:00:40 crc kubenswrapper[4756]: I1203 12:00:40.411519 4756 generic.go:334] "Generic 
(PLEG): container finished" podID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerID="b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21" exitCode=0 Dec 03 12:00:40 crc kubenswrapper[4756]: I1203 12:00:40.411635 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgrd6" event={"ID":"10a22190-856a-4e64-9fb3-04ffa8589b1f","Type":"ContainerDied","Data":"b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21"} Dec 03 12:00:41 crc kubenswrapper[4756]: I1203 12:00:41.421884 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgrd6" event={"ID":"10a22190-856a-4e64-9fb3-04ffa8589b1f","Type":"ContainerStarted","Data":"daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6"} Dec 03 12:00:41 crc kubenswrapper[4756]: I1203 12:00:41.450499 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jgrd6" podStartSLOduration=3.623412727 podStartE2EDuration="7.450467783s" podCreationTimestamp="2025-12-03 12:00:34 +0000 UTC" firstStartedPulling="2025-12-03 12:00:37.254019914 +0000 UTC m=+4048.284021148" lastFinishedPulling="2025-12-03 12:00:41.08107496 +0000 UTC m=+4052.111076204" observedRunningTime="2025-12-03 12:00:41.441880424 +0000 UTC m=+4052.471881688" watchObservedRunningTime="2025-12-03 12:00:41.450467783 +0000 UTC m=+4052.480469027" Dec 03 12:00:45 crc kubenswrapper[4756]: I1203 12:00:45.157728 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:45 crc kubenswrapper[4756]: I1203 12:00:45.158336 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:46 crc kubenswrapper[4756]: I1203 12:00:46.213315 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jgrd6" 
podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="registry-server" probeResult="failure" output=< Dec 03 12:00:46 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 12:00:46 crc kubenswrapper[4756]: > Dec 03 12:00:51 crc kubenswrapper[4756]: I1203 12:00:51.843230 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-bxjtd_6ab38bf9-5ba7-4205-82b7-30337cd2694f/cert-manager-controller/0.log" Dec 03 12:00:52 crc kubenswrapper[4756]: I1203 12:00:52.160929 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-j2l27_23b87262-d9c4-45f6-8cc7-711f71e1a6c0/cert-manager-webhook/0.log" Dec 03 12:00:52 crc kubenswrapper[4756]: I1203 12:00:52.165567 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-lpmgs_15eea9f7-71bc-4b1d-810a-8dd3da3015f2/cert-manager-cainjector/0.log" Dec 03 12:00:55 crc kubenswrapper[4756]: I1203 12:00:55.208780 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:55 crc kubenswrapper[4756]: I1203 12:00:55.260440 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:55 crc kubenswrapper[4756]: I1203 12:00:55.450098 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgrd6"] Dec 03 12:00:56 crc kubenswrapper[4756]: I1203 12:00:56.547760 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jgrd6" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="registry-server" containerID="cri-o://daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6" gracePeriod=2 Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.058581 4756 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.188736 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjwn9\" (UniqueName: \"kubernetes.io/projected/10a22190-856a-4e64-9fb3-04ffa8589b1f-kube-api-access-kjwn9\") pod \"10a22190-856a-4e64-9fb3-04ffa8589b1f\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.188797 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-utilities\") pod \"10a22190-856a-4e64-9fb3-04ffa8589b1f\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.188933 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-catalog-content\") pod \"10a22190-856a-4e64-9fb3-04ffa8589b1f\" (UID: \"10a22190-856a-4e64-9fb3-04ffa8589b1f\") " Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.189701 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-utilities" (OuterVolumeSpecName: "utilities") pod "10a22190-856a-4e64-9fb3-04ffa8589b1f" (UID: "10a22190-856a-4e64-9fb3-04ffa8589b1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.194246 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a22190-856a-4e64-9fb3-04ffa8589b1f-kube-api-access-kjwn9" (OuterVolumeSpecName: "kube-api-access-kjwn9") pod "10a22190-856a-4e64-9fb3-04ffa8589b1f" (UID: "10a22190-856a-4e64-9fb3-04ffa8589b1f"). InnerVolumeSpecName "kube-api-access-kjwn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.291733 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjwn9\" (UniqueName: \"kubernetes.io/projected/10a22190-856a-4e64-9fb3-04ffa8589b1f-kube-api-access-kjwn9\") on node \"crc\" DevicePath \"\"" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.291792 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.291726 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10a22190-856a-4e64-9fb3-04ffa8589b1f" (UID: "10a22190-856a-4e64-9fb3-04ffa8589b1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.393464 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10a22190-856a-4e64-9fb3-04ffa8589b1f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.568542 4756 generic.go:334] "Generic (PLEG): container finished" podID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerID="daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6" exitCode=0 Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.568614 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgrd6" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.568581 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgrd6" event={"ID":"10a22190-856a-4e64-9fb3-04ffa8589b1f","Type":"ContainerDied","Data":"daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6"} Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.569000 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgrd6" event={"ID":"10a22190-856a-4e64-9fb3-04ffa8589b1f","Type":"ContainerDied","Data":"747e06cb6d98067cc3d34e741c15caecdee218e97a62b0e780aad107992faf9a"} Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.569022 4756 scope.go:117] "RemoveContainer" containerID="daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.603554 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgrd6"] Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.603698 4756 scope.go:117] "RemoveContainer" containerID="b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.614526 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jgrd6"] Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.628235 4756 scope.go:117] "RemoveContainer" containerID="09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.673578 4756 scope.go:117] "RemoveContainer" containerID="daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6" Dec 03 12:00:57 crc kubenswrapper[4756]: E1203 12:00:57.674230 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6\": container with ID starting with daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6 not found: ID does not exist" containerID="daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.674288 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6"} err="failed to get container status \"daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6\": rpc error: code = NotFound desc = could not find container \"daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6\": container with ID starting with daeb1a4952fb0e196a232a9190243b0897c9f6efe5e51b53390392b9a3333cc6 not found: ID does not exist" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.674317 4756 scope.go:117] "RemoveContainer" containerID="b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21" Dec 03 12:00:57 crc kubenswrapper[4756]: E1203 12:00:57.674814 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21\": container with ID starting with b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21 not found: ID does not exist" containerID="b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.674841 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21"} err="failed to get container status \"b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21\": rpc error: code = NotFound desc = could not find container \"b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21\": container with ID 
starting with b4b1f8d32116a7b0148f30f3a4329bf5f54d356f36014ca76b15216df37e9c21 not found: ID does not exist" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.674858 4756 scope.go:117] "RemoveContainer" containerID="09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724" Dec 03 12:00:57 crc kubenswrapper[4756]: E1203 12:00:57.675188 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724\": container with ID starting with 09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724 not found: ID does not exist" containerID="09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724" Dec 03 12:00:57 crc kubenswrapper[4756]: I1203 12:00:57.675219 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724"} err="failed to get container status \"09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724\": rpc error: code = NotFound desc = could not find container \"09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724\": container with ID starting with 09c44bba3caa7b1b50a4ee023adcaf774aae1ae48c6ee7916213cd3621f4b724 not found: ID does not exist" Dec 03 12:00:59 crc kubenswrapper[4756]: I1203 12:00:59.246647 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" path="/var/lib/kubelet/pods/10a22190-856a-4e64-9fb3-04ffa8589b1f/volumes" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.163616 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29412721-gknsp"] Dec 03 12:01:00 crc kubenswrapper[4756]: E1203 12:01:00.164606 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="extract-content" Dec 03 12:01:00 crc 
kubenswrapper[4756]: I1203 12:01:00.164639 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="extract-content" Dec 03 12:01:00 crc kubenswrapper[4756]: E1203 12:01:00.164698 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="registry-server" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.164713 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="registry-server" Dec 03 12:01:00 crc kubenswrapper[4756]: E1203 12:01:00.164782 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="extract-utilities" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.164799 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="extract-utilities" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.165244 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a22190-856a-4e64-9fb3-04ffa8589b1f" containerName="registry-server" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.166596 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.176179 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29412721-gknsp"] Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.253155 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-combined-ca-bundle\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.253296 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-config-data\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.253341 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p8lc\" (UniqueName: \"kubernetes.io/projected/ff18af3a-68ab-4008-9f9d-cb5733c9a529-kube-api-access-8p8lc\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.253431 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-fernet-keys\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.355030 4756 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-combined-ca-bundle\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.355109 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-config-data\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.355156 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p8lc\" (UniqueName: \"kubernetes.io/projected/ff18af3a-68ab-4008-9f9d-cb5733c9a529-kube-api-access-8p8lc\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.355212 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-fernet-keys\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.363429 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-config-data\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.363642 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-fernet-keys\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.363650 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-combined-ca-bundle\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.376238 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p8lc\" (UniqueName: \"kubernetes.io/projected/ff18af3a-68ab-4008-9f9d-cb5733c9a529-kube-api-access-8p8lc\") pod \"keystone-cron-29412721-gknsp\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.502093 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:00 crc kubenswrapper[4756]: I1203 12:01:00.966154 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29412721-gknsp"] Dec 03 12:01:01 crc kubenswrapper[4756]: I1203 12:01:01.617620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412721-gknsp" event={"ID":"ff18af3a-68ab-4008-9f9d-cb5733c9a529","Type":"ContainerStarted","Data":"580400341ac8f446b555c9d329a1d4da445b86006f705a6929081f02c700c57c"} Dec 03 12:01:01 crc kubenswrapper[4756]: I1203 12:01:01.617936 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412721-gknsp" event={"ID":"ff18af3a-68ab-4008-9f9d-cb5733c9a529","Type":"ContainerStarted","Data":"a15c0fd561be32e040009a238c1bb4606b59087be80857aa36b0156a6a8b6bfe"} Dec 03 12:01:01 crc kubenswrapper[4756]: I1203 12:01:01.643561 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29412721-gknsp" podStartSLOduration=1.643535355 podStartE2EDuration="1.643535355s" podCreationTimestamp="2025-12-03 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:01:01.641346855 +0000 UTC m=+4072.671348129" watchObservedRunningTime="2025-12-03 12:01:01.643535355 +0000 UTC m=+4072.673536599" Dec 03 12:01:03 crc kubenswrapper[4756]: I1203 12:01:03.636271 4756 generic.go:334] "Generic (PLEG): container finished" podID="ff18af3a-68ab-4008-9f9d-cb5733c9a529" containerID="580400341ac8f446b555c9d329a1d4da445b86006f705a6929081f02c700c57c" exitCode=0 Dec 03 12:01:03 crc kubenswrapper[4756]: I1203 12:01:03.636335 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412721-gknsp" 
event={"ID":"ff18af3a-68ab-4008-9f9d-cb5733c9a529","Type":"ContainerDied","Data":"580400341ac8f446b555c9d329a1d4da445b86006f705a6929081f02c700c57c"} Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.090202 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.263806 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-config-data\") pod \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.264083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p8lc\" (UniqueName: \"kubernetes.io/projected/ff18af3a-68ab-4008-9f9d-cb5733c9a529-kube-api-access-8p8lc\") pod \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.264159 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-fernet-keys\") pod \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.264275 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-combined-ca-bundle\") pod \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\" (UID: \"ff18af3a-68ab-4008-9f9d-cb5733c9a529\") " Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.284387 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff18af3a-68ab-4008-9f9d-cb5733c9a529-kube-api-access-8p8lc" 
(OuterVolumeSpecName: "kube-api-access-8p8lc") pod "ff18af3a-68ab-4008-9f9d-cb5733c9a529" (UID: "ff18af3a-68ab-4008-9f9d-cb5733c9a529"). InnerVolumeSpecName "kube-api-access-8p8lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.286430 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ff18af3a-68ab-4008-9f9d-cb5733c9a529" (UID: "ff18af3a-68ab-4008-9f9d-cb5733c9a529"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.304759 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff18af3a-68ab-4008-9f9d-cb5733c9a529" (UID: "ff18af3a-68ab-4008-9f9d-cb5733c9a529"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.328617 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-config-data" (OuterVolumeSpecName: "config-data") pod "ff18af3a-68ab-4008-9f9d-cb5733c9a529" (UID: "ff18af3a-68ab-4008-9f9d-cb5733c9a529"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.366587 4756 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.366645 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p8lc\" (UniqueName: \"kubernetes.io/projected/ff18af3a-68ab-4008-9f9d-cb5733c9a529-kube-api-access-8p8lc\") on node \"crc\" DevicePath \"\"" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.366670 4756 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.366682 4756 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff18af3a-68ab-4008-9f9d-cb5733c9a529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.657972 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29412721-gknsp" event={"ID":"ff18af3a-68ab-4008-9f9d-cb5733c9a529","Type":"ContainerDied","Data":"a15c0fd561be32e040009a238c1bb4606b59087be80857aa36b0156a6a8b6bfe"} Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.658299 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a15c0fd561be32e040009a238c1bb4606b59087be80857aa36b0156a6a8b6bfe" Dec 03 12:01:05 crc kubenswrapper[4756]: I1203 12:01:05.658224 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29412721-gknsp" Dec 03 12:01:06 crc kubenswrapper[4756]: I1203 12:01:06.915748 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-9497w_ef21e85d-40d5-4131-af1d-5bc35102ef29/nmstate-console-plugin/0.log" Dec 03 12:01:07 crc kubenswrapper[4756]: I1203 12:01:07.158801 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8f7l7_379d2dc0-2b74-4c9e-936e-160b41e74098/nmstate-handler/0.log" Dec 03 12:01:07 crc kubenswrapper[4756]: I1203 12:01:07.254044 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-znss6_f3c59b4b-7ffa-46bb-a92c-d8ae4218335d/kube-rbac-proxy/0.log" Dec 03 12:01:07 crc kubenswrapper[4756]: I1203 12:01:07.293533 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-znss6_f3c59b4b-7ffa-46bb-a92c-d8ae4218335d/nmstate-metrics/0.log" Dec 03 12:01:07 crc kubenswrapper[4756]: I1203 12:01:07.434085 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-49m5x_3ff12a67-01d3-4dcd-9528-4625113befa2/nmstate-operator/0.log" Dec 03 12:01:07 crc kubenswrapper[4756]: I1203 12:01:07.519693 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-ztzcc_5e29e32a-6823-448d-9af0-1b4aa213a0d2/nmstate-webhook/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.224078 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-bq2xc_dbcde851-76fe-4be1-ae50-ed62ebcc75a3/kube-rbac-proxy/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.284095 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-bq2xc_dbcde851-76fe-4be1-ae50-ed62ebcc75a3/controller/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: 
I1203 12:01:23.444651 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.628721 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.628740 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.630336 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.659172 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.880340 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.915570 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.918757 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:01:23 crc kubenswrapper[4756]: I1203 12:01:23.941278 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.097062 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.119143 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.134142 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.170265 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/controller/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.322215 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/frr-metrics/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.344699 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/kube-rbac-proxy/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.444378 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/kube-rbac-proxy-frr/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.524473 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/reloader/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.694476 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-v6jcq_4686b721-17d6-4951-9107-81ba7c5f1658/frr-k8s-webhook-server/0.log" Dec 03 12:01:24 crc kubenswrapper[4756]: I1203 12:01:24.922717 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fdcbcf598-mh4g6_ca73ab13-f16d-40fe-b7a1-f2c5b93e7456/manager/0.log" Dec 03 12:01:25 crc kubenswrapper[4756]: I1203 12:01:25.046340 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b69d8d987-v5lqb_3a1c84bb-c40c-4af4-80a4-75991c028724/webhook-server/0.log" Dec 03 12:01:25 crc kubenswrapper[4756]: I1203 12:01:25.332546 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7k2pt_dec16985-cd5e-425b-a72d-9a13e835c965/kube-rbac-proxy/0.log" Dec 03 12:01:25 crc kubenswrapper[4756]: I1203 12:01:25.928241 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7k2pt_dec16985-cd5e-425b-a72d-9a13e835c965/speaker/0.log" Dec 03 12:01:25 crc kubenswrapper[4756]: I1203 12:01:25.932937 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/frr/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.102376 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/util/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.320472 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/util/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.330513 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/pull/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.383437 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/pull/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.537636 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/extract/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.556500 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/pull/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.584702 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/util/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.804213 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/util/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.980971 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/util/0.log" Dec 03 12:01:40 crc kubenswrapper[4756]: I1203 12:01:40.984471 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/pull/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.022185 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/pull/0.log" Dec 03 
12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.195923 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/util/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.209382 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/pull/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.258029 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/extract/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.431391 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-utilities/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.583223 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-content/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.611540 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-utilities/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.632489 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-content/0.log" Dec 03 12:01:41 crc kubenswrapper[4756]: I1203 12:01:41.796423 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-utilities/0.log" Dec 03 12:01:41 crc 
kubenswrapper[4756]: I1203 12:01:41.809610 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-content/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.059136 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-utilities/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.141694 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/registry-server/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.222624 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-content/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.222771 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-utilities/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.294481 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-content/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.521203 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-utilities/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.556599 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-content/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.749391 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r52tb_7f5dea91-6dce-4093-a943-05e3359b754d/marketplace-operator/0.log" Dec 03 12:01:42 crc kubenswrapper[4756]: I1203 12:01:42.939136 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-utilities/0.log" Dec 03 12:01:43 crc kubenswrapper[4756]: I1203 12:01:43.324542 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/registry-server/0.log" Dec 03 12:01:43 crc kubenswrapper[4756]: I1203 12:01:43.633494 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-utilities/0.log" Dec 03 12:01:43 crc kubenswrapper[4756]: I1203 12:01:43.876676 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-content/0.log" Dec 03 12:01:43 crc kubenswrapper[4756]: I1203 12:01:43.891177 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-content/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.077583 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-utilities/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.085963 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-content/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.124110 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-utilities/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.338512 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-utilities/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.367044 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-content/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.374448 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-content/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.531203 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-utilities/0.log" Dec 03 12:01:44 crc kubenswrapper[4756]: I1203 12:01:44.571123 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-content/0.log" Dec 03 12:01:46 crc kubenswrapper[4756]: I1203 12:01:46.240324 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/registry-server/0.log" Dec 03 12:01:46 crc kubenswrapper[4756]: I1203 12:01:46.396479 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/registry-server/0.log" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.654014 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2sb7x"] Dec 03 12:02:00 crc kubenswrapper[4756]: E1203 12:02:00.655301 4756 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff18af3a-68ab-4008-9f9d-cb5733c9a529" containerName="keystone-cron" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.655316 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff18af3a-68ab-4008-9f9d-cb5733c9a529" containerName="keystone-cron" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.655691 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff18af3a-68ab-4008-9f9d-cb5733c9a529" containerName="keystone-cron" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.657997 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.680873 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sb7x"] Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.820850 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-catalog-content\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.821011 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd55v\" (UniqueName: \"kubernetes.io/projected/f5a9d568-388c-42f0-a1a9-9deb135e0f93-kube-api-access-bd55v\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.821099 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-utilities\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.832264 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-88b9w"] Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.834392 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.842333 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88b9w"] Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.923395 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-utilities\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.923455 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-catalog-content\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.923573 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd55v\" (UniqueName: \"kubernetes.io/projected/f5a9d568-388c-42f0-a1a9-9deb135e0f93-kube-api-access-bd55v\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 
12:02:00.924006 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-utilities\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.924070 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-catalog-content\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.946757 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd55v\" (UniqueName: \"kubernetes.io/projected/f5a9d568-388c-42f0-a1a9-9deb135e0f93-kube-api-access-bd55v\") pod \"certified-operators-2sb7x\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:00 crc kubenswrapper[4756]: I1203 12:02:00.999082 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.030204 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-catalog-content\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.030641 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9br9\" (UniqueName: \"kubernetes.io/projected/c7adc085-6f10-4200-ac65-b9110b21d38a-kube-api-access-z9br9\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.030663 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-utilities\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.136732 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-catalog-content\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.136854 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9br9\" (UniqueName: \"kubernetes.io/projected/c7adc085-6f10-4200-ac65-b9110b21d38a-kube-api-access-z9br9\") pod 
\"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.136884 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-utilities\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.137467 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-utilities\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.137657 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-catalog-content\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.175058 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9br9\" (UniqueName: \"kubernetes.io/projected/c7adc085-6f10-4200-ac65-b9110b21d38a-kube-api-access-z9br9\") pod \"community-operators-88b9w\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.178418 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:01 crc kubenswrapper[4756]: I1203 12:02:01.456852 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sb7x"] Dec 03 12:02:02 crc kubenswrapper[4756]: I1203 12:02:02.006761 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88b9w"] Dec 03 12:02:02 crc kubenswrapper[4756]: I1203 12:02:02.251655 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerStarted","Data":"68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a"} Dec 03 12:02:02 crc kubenswrapper[4756]: I1203 12:02:02.252049 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerStarted","Data":"8896ade24737a37463d57738a53158931b1d77293c4c82ae072f6f99e630a4e1"} Dec 03 12:02:02 crc kubenswrapper[4756]: I1203 12:02:02.257607 4756 generic.go:334] "Generic (PLEG): container finished" podID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerID="01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0" exitCode=0 Dec 03 12:02:02 crc kubenswrapper[4756]: I1203 12:02:02.257664 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sb7x" event={"ID":"f5a9d568-388c-42f0-a1a9-9deb135e0f93","Type":"ContainerDied","Data":"01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0"} Dec 03 12:02:02 crc kubenswrapper[4756]: I1203 12:02:02.257689 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sb7x" event={"ID":"f5a9d568-388c-42f0-a1a9-9deb135e0f93","Type":"ContainerStarted","Data":"a064521cd3978b3088368610e5e2fb43ce45a46051d364012c2c0fe3ae2cdf82"} Dec 03 12:02:03 crc 
kubenswrapper[4756]: I1203 12:02:03.284622 4756 generic.go:334] "Generic (PLEG): container finished" podID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerID="68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a" exitCode=0 Dec 03 12:02:03 crc kubenswrapper[4756]: I1203 12:02:03.284965 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerDied","Data":"68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a"} Dec 03 12:02:03 crc kubenswrapper[4756]: I1203 12:02:03.826678 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zb9l8"] Dec 03 12:02:03 crc kubenswrapper[4756]: I1203 12:02:03.830244 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:03 crc kubenswrapper[4756]: I1203 12:02:03.843547 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb9l8"] Dec 03 12:02:03 crc kubenswrapper[4756]: I1203 12:02:03.919475 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-catalog-content\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:03 crc kubenswrapper[4756]: I1203 12:02:03.919519 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddxtx\" (UniqueName: \"kubernetes.io/projected/ff5dd2b5-591d-4832-992a-41525a3a65fd-kube-api-access-ddxtx\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:03 crc kubenswrapper[4756]: I1203 12:02:03.919592 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-utilities\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.022561 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-catalog-content\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.022643 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddxtx\" (UniqueName: \"kubernetes.io/projected/ff5dd2b5-591d-4832-992a-41525a3a65fd-kube-api-access-ddxtx\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.022733 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-utilities\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.023503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-utilities\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.023626 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-catalog-content\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.062091 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddxtx\" (UniqueName: \"kubernetes.io/projected/ff5dd2b5-591d-4832-992a-41525a3a65fd-kube-api-access-ddxtx\") pod \"redhat-marketplace-zb9l8\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.152552 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.317474 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sb7x" event={"ID":"f5a9d568-388c-42f0-a1a9-9deb135e0f93","Type":"ContainerStarted","Data":"fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595"} Dec 03 12:02:04 crc kubenswrapper[4756]: I1203 12:02:04.724692 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb9l8"] Dec 03 12:02:05 crc kubenswrapper[4756]: I1203 12:02:05.329101 4756 generic.go:334] "Generic (PLEG): container finished" podID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerID="3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5" exitCode=0 Dec 03 12:02:05 crc kubenswrapper[4756]: I1203 12:02:05.329164 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb9l8" event={"ID":"ff5dd2b5-591d-4832-992a-41525a3a65fd","Type":"ContainerDied","Data":"3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5"} Dec 03 12:02:05 crc kubenswrapper[4756]: 
I1203 12:02:05.329427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb9l8" event={"ID":"ff5dd2b5-591d-4832-992a-41525a3a65fd","Type":"ContainerStarted","Data":"f0c9b05d8ea60c00054714b6f2b2bc7156f9b5ccb6f699f6d3ed4ce4ef55bd94"} Dec 03 12:02:05 crc kubenswrapper[4756]: I1203 12:02:05.331634 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerStarted","Data":"f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946"} Dec 03 12:02:06 crc kubenswrapper[4756]: I1203 12:02:06.344522 4756 generic.go:334] "Generic (PLEG): container finished" podID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerID="f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946" exitCode=0 Dec 03 12:02:06 crc kubenswrapper[4756]: I1203 12:02:06.344622 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerDied","Data":"f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946"} Dec 03 12:02:07 crc kubenswrapper[4756]: I1203 12:02:07.354777 4756 generic.go:334] "Generic (PLEG): container finished" podID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerID="fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595" exitCode=0 Dec 03 12:02:07 crc kubenswrapper[4756]: I1203 12:02:07.354860 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sb7x" event={"ID":"f5a9d568-388c-42f0-a1a9-9deb135e0f93","Type":"ContainerDied","Data":"fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595"} Dec 03 12:02:09 crc kubenswrapper[4756]: I1203 12:02:09.388656 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" 
event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerStarted","Data":"203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051"} Dec 03 12:02:09 crc kubenswrapper[4756]: I1203 12:02:09.392612 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sb7x" event={"ID":"f5a9d568-388c-42f0-a1a9-9deb135e0f93","Type":"ContainerStarted","Data":"f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663"} Dec 03 12:02:09 crc kubenswrapper[4756]: I1203 12:02:09.396505 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb9l8" event={"ID":"ff5dd2b5-591d-4832-992a-41525a3a65fd","Type":"ContainerStarted","Data":"8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd"} Dec 03 12:02:09 crc kubenswrapper[4756]: I1203 12:02:09.414139 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-88b9w" podStartSLOduration=4.656582818 podStartE2EDuration="9.414115692s" podCreationTimestamp="2025-12-03 12:02:00 +0000 UTC" firstStartedPulling="2025-12-03 12:02:03.289116452 +0000 UTC m=+4134.319117696" lastFinishedPulling="2025-12-03 12:02:08.046649326 +0000 UTC m=+4139.076650570" observedRunningTime="2025-12-03 12:02:09.413865704 +0000 UTC m=+4140.443866948" watchObservedRunningTime="2025-12-03 12:02:09.414115692 +0000 UTC m=+4140.444116936" Dec 03 12:02:09 crc kubenswrapper[4756]: I1203 12:02:09.453985 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2sb7x" podStartSLOduration=3.5300927619999998 podStartE2EDuration="9.453941183s" podCreationTimestamp="2025-12-03 12:02:00 +0000 UTC" firstStartedPulling="2025-12-03 12:02:02.261215233 +0000 UTC m=+4133.291216477" lastFinishedPulling="2025-12-03 12:02:08.185063654 +0000 UTC m=+4139.215064898" observedRunningTime="2025-12-03 12:02:09.441336077 +0000 UTC m=+4140.471337321" 
watchObservedRunningTime="2025-12-03 12:02:09.453941183 +0000 UTC m=+4140.483942427" Dec 03 12:02:09 crc kubenswrapper[4756]: E1203 12:02:09.539878 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5dd2b5_591d_4832_992a_41525a3a65fd.slice/crio-8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd.scope\": RecentStats: unable to find data in memory cache]" Dec 03 12:02:10 crc kubenswrapper[4756]: I1203 12:02:10.411036 4756 generic.go:334] "Generic (PLEG): container finished" podID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerID="8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd" exitCode=0 Dec 03 12:02:10 crc kubenswrapper[4756]: I1203 12:02:10.411151 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb9l8" event={"ID":"ff5dd2b5-591d-4832-992a-41525a3a65fd","Type":"ContainerDied","Data":"8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd"} Dec 03 12:02:10 crc kubenswrapper[4756]: I1203 12:02:10.413855 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 12:02:11 crc kubenswrapper[4756]: I1203 12:02:10.999794 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:11 crc kubenswrapper[4756]: I1203 12:02:11.000239 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:11 crc kubenswrapper[4756]: I1203 12:02:11.180057 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:11 crc kubenswrapper[4756]: I1203 12:02:11.180177 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-88b9w" Dec 
03 12:02:11 crc kubenswrapper[4756]: I1203 12:02:11.469657 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb9l8" event={"ID":"ff5dd2b5-591d-4832-992a-41525a3a65fd","Type":"ContainerStarted","Data":"9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae"} Dec 03 12:02:11 crc kubenswrapper[4756]: I1203 12:02:11.498460 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zb9l8" podStartSLOduration=2.743892683 podStartE2EDuration="8.498436866s" podCreationTimestamp="2025-12-03 12:02:03 +0000 UTC" firstStartedPulling="2025-12-03 12:02:05.330834137 +0000 UTC m=+4136.360835381" lastFinishedPulling="2025-12-03 12:02:11.08537832 +0000 UTC m=+4142.115379564" observedRunningTime="2025-12-03 12:02:11.487592934 +0000 UTC m=+4142.517594188" watchObservedRunningTime="2025-12-03 12:02:11.498436866 +0000 UTC m=+4142.528438120" Dec 03 12:02:12 crc kubenswrapper[4756]: I1203 12:02:12.093529 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2sb7x" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="registry-server" probeResult="failure" output=< Dec 03 12:02:12 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 12:02:12 crc kubenswrapper[4756]: > Dec 03 12:02:12 crc kubenswrapper[4756]: I1203 12:02:12.257107 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-88b9w" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="registry-server" probeResult="failure" output=< Dec 03 12:02:12 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 12:02:12 crc kubenswrapper[4756]: > Dec 03 12:02:14 crc kubenswrapper[4756]: I1203 12:02:14.154259 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:14 
crc kubenswrapper[4756]: I1203 12:02:14.154602 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:14 crc kubenswrapper[4756]: I1203 12:02:14.226108 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:21 crc kubenswrapper[4756]: I1203 12:02:21.070074 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:21 crc kubenswrapper[4756]: I1203 12:02:21.142781 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:21 crc kubenswrapper[4756]: I1203 12:02:21.231422 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:21 crc kubenswrapper[4756]: I1203 12:02:21.288814 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:21 crc kubenswrapper[4756]: I1203 12:02:21.328368 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2sb7x"] Dec 03 12:02:22 crc kubenswrapper[4756]: I1203 12:02:22.585476 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2sb7x" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="registry-server" containerID="cri-o://f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663" gracePeriod=2 Dec 03 12:02:22 crc kubenswrapper[4756]: I1203 12:02:22.607706 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 03 12:02:22 crc kubenswrapper[4756]: I1203 12:02:22.608063 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.136183 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.286837 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-catalog-content\") pod \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.287103 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd55v\" (UniqueName: \"kubernetes.io/projected/f5a9d568-388c-42f0-a1a9-9deb135e0f93-kube-api-access-bd55v\") pod \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.287314 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-utilities\") pod \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\" (UID: \"f5a9d568-388c-42f0-a1a9-9deb135e0f93\") " Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.288256 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-utilities" (OuterVolumeSpecName: "utilities") pod "f5a9d568-388c-42f0-a1a9-9deb135e0f93" (UID: 
"f5a9d568-388c-42f0-a1a9-9deb135e0f93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.309449 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a9d568-388c-42f0-a1a9-9deb135e0f93-kube-api-access-bd55v" (OuterVolumeSpecName: "kube-api-access-bd55v") pod "f5a9d568-388c-42f0-a1a9-9deb135e0f93" (UID: "f5a9d568-388c-42f0-a1a9-9deb135e0f93"). InnerVolumeSpecName "kube-api-access-bd55v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.347717 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5a9d568-388c-42f0-a1a9-9deb135e0f93" (UID: "f5a9d568-388c-42f0-a1a9-9deb135e0f93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.390665 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd55v\" (UniqueName: \"kubernetes.io/projected/f5a9d568-388c-42f0-a1a9-9deb135e0f93-kube-api-access-bd55v\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.390722 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.390740 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a9d568-388c-42f0-a1a9-9deb135e0f93-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.519193 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-88b9w"] Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.519484 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-88b9w" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="registry-server" containerID="cri-o://203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051" gracePeriod=2 Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.610736 4756 generic.go:334] "Generic (PLEG): container finished" podID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerID="f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663" exitCode=0 Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.610899 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sb7x" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.610938 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sb7x" event={"ID":"f5a9d568-388c-42f0-a1a9-9deb135e0f93","Type":"ContainerDied","Data":"f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663"} Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.616264 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sb7x" event={"ID":"f5a9d568-388c-42f0-a1a9-9deb135e0f93","Type":"ContainerDied","Data":"a064521cd3978b3088368610e5e2fb43ce45a46051d364012c2c0fe3ae2cdf82"} Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.616324 4756 scope.go:117] "RemoveContainer" containerID="f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.714665 4756 scope.go:117] "RemoveContainer" containerID="fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595" Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.718575 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-2sb7x"] Dec 03 12:02:23 crc kubenswrapper[4756]: I1203 12:02:23.730847 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2sb7x"] Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.330296 4756 scope.go:117] "RemoveContainer" containerID="01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.334025 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.445396 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.448994 4756 scope.go:117] "RemoveContainer" containerID="f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663" Dec 03 12:02:24 crc kubenswrapper[4756]: E1203 12:02:24.449482 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663\": container with ID starting with f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663 not found: ID does not exist" containerID="f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.449554 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663"} err="failed to get container status \"f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663\": rpc error: code = NotFound desc = could not find container \"f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663\": container with ID starting with 
f310cfbd4ca422da80e884cf41b7901b6fe797fa41aaa394a9b86b2cec6f6663 not found: ID does not exist" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.449619 4756 scope.go:117] "RemoveContainer" containerID="fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595" Dec 03 12:02:24 crc kubenswrapper[4756]: E1203 12:02:24.450224 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595\": container with ID starting with fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595 not found: ID does not exist" containerID="fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.450273 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595"} err="failed to get container status \"fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595\": rpc error: code = NotFound desc = could not find container \"fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595\": container with ID starting with fc4e761d247ae810d08a547438a9e3c22538651660624ce3b24173fbeee99595 not found: ID does not exist" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.450302 4756 scope.go:117] "RemoveContainer" containerID="01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0" Dec 03 12:02:24 crc kubenswrapper[4756]: E1203 12:02:24.450621 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0\": container with ID starting with 01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0 not found: ID does not exist" containerID="01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0" Dec 03 12:02:24 crc 
kubenswrapper[4756]: I1203 12:02:24.450670 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0"} err="failed to get container status \"01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0\": rpc error: code = NotFound desc = could not find container \"01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0\": container with ID starting with 01c044ef3cfe9a9d0f059bb65aef2d5225d94d7645bef5039171003c764472b0 not found: ID does not exist" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.625731 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-catalog-content\") pod \"c7adc085-6f10-4200-ac65-b9110b21d38a\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.626340 4756 generic.go:334] "Generic (PLEG): container finished" podID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerID="203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051" exitCode=0 Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.626368 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerDied","Data":"203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051"} Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.648965 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88b9w" event={"ID":"c7adc085-6f10-4200-ac65-b9110b21d38a","Type":"ContainerDied","Data":"8896ade24737a37463d57738a53158931b1d77293c4c82ae072f6f99e630a4e1"} Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.649015 4756 scope.go:117] "RemoveContainer" 
containerID="203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.649111 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-utilities\") pod \"c7adc085-6f10-4200-ac65-b9110b21d38a\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.649188 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9br9\" (UniqueName: \"kubernetes.io/projected/c7adc085-6f10-4200-ac65-b9110b21d38a-kube-api-access-z9br9\") pod \"c7adc085-6f10-4200-ac65-b9110b21d38a\" (UID: \"c7adc085-6f10-4200-ac65-b9110b21d38a\") " Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.626436 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88b9w" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.656464 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-utilities" (OuterVolumeSpecName: "utilities") pod "c7adc085-6f10-4200-ac65-b9110b21d38a" (UID: "c7adc085-6f10-4200-ac65-b9110b21d38a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.674245 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7adc085-6f10-4200-ac65-b9110b21d38a-kube-api-access-z9br9" (OuterVolumeSpecName: "kube-api-access-z9br9") pod "c7adc085-6f10-4200-ac65-b9110b21d38a" (UID: "c7adc085-6f10-4200-ac65-b9110b21d38a"). InnerVolumeSpecName "kube-api-access-z9br9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.712023 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7adc085-6f10-4200-ac65-b9110b21d38a" (UID: "c7adc085-6f10-4200-ac65-b9110b21d38a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.737762 4756 scope.go:117] "RemoveContainer" containerID="f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.759335 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.759403 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9br9\" (UniqueName: \"kubernetes.io/projected/c7adc085-6f10-4200-ac65-b9110b21d38a-kube-api-access-z9br9\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.759426 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7adc085-6f10-4200-ac65-b9110b21d38a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.779180 4756 scope.go:117] "RemoveContainer" containerID="68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.842041 4756 scope.go:117] "RemoveContainer" containerID="203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051" Dec 03 12:02:24 crc kubenswrapper[4756]: E1203 12:02:24.845420 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051\": container with ID starting with 203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051 not found: ID does not exist" containerID="203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.845475 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051"} err="failed to get container status \"203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051\": rpc error: code = NotFound desc = could not find container \"203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051\": container with ID starting with 203a704cc73734fdaea097a56535bcaf3717118a433a4b5e904d441c9838c051 not found: ID does not exist" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.845508 4756 scope.go:117] "RemoveContainer" containerID="f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946" Dec 03 12:02:24 crc kubenswrapper[4756]: E1203 12:02:24.845922 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946\": container with ID starting with f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946 not found: ID does not exist" containerID="f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.846000 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946"} err="failed to get container status \"f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946\": rpc error: code = NotFound desc = could not find container 
\"f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946\": container with ID starting with f2ae4238e33a39d35f46e263cc499924ac715102f558e96997197a7cdeab2946 not found: ID does not exist" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.846057 4756 scope.go:117] "RemoveContainer" containerID="68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a" Dec 03 12:02:24 crc kubenswrapper[4756]: E1203 12:02:24.846355 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a\": container with ID starting with 68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a not found: ID does not exist" containerID="68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.846381 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a"} err="failed to get container status \"68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a\": rpc error: code = NotFound desc = could not find container \"68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a\": container with ID starting with 68e3ec129d54137df9592f28821221c0a7c69486fbaca0468e7b6cad1170c40a not found: ID does not exist" Dec 03 12:02:24 crc kubenswrapper[4756]: I1203 12:02:24.994112 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88b9w"] Dec 03 12:02:25 crc kubenswrapper[4756]: I1203 12:02:25.001441 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-88b9w"] Dec 03 12:02:25 crc kubenswrapper[4756]: I1203 12:02:25.245535 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" 
path="/var/lib/kubelet/pods/c7adc085-6f10-4200-ac65-b9110b21d38a/volumes" Dec 03 12:02:25 crc kubenswrapper[4756]: I1203 12:02:25.247375 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" path="/var/lib/kubelet/pods/f5a9d568-388c-42f0-a1a9-9deb135e0f93/volumes" Dec 03 12:02:27 crc kubenswrapper[4756]: I1203 12:02:27.722139 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb9l8"] Dec 03 12:02:27 crc kubenswrapper[4756]: I1203 12:02:27.723105 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zb9l8" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="registry-server" containerID="cri-o://9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae" gracePeriod=2 Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.217628 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.330059 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-utilities\") pod \"ff5dd2b5-591d-4832-992a-41525a3a65fd\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.330485 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-catalog-content\") pod \"ff5dd2b5-591d-4832-992a-41525a3a65fd\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.330705 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddxtx\" (UniqueName: 
\"kubernetes.io/projected/ff5dd2b5-591d-4832-992a-41525a3a65fd-kube-api-access-ddxtx\") pod \"ff5dd2b5-591d-4832-992a-41525a3a65fd\" (UID: \"ff5dd2b5-591d-4832-992a-41525a3a65fd\") " Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.330902 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-utilities" (OuterVolumeSpecName: "utilities") pod "ff5dd2b5-591d-4832-992a-41525a3a65fd" (UID: "ff5dd2b5-591d-4832-992a-41525a3a65fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.332587 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.336218 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5dd2b5-591d-4832-992a-41525a3a65fd-kube-api-access-ddxtx" (OuterVolumeSpecName: "kube-api-access-ddxtx") pod "ff5dd2b5-591d-4832-992a-41525a3a65fd" (UID: "ff5dd2b5-591d-4832-992a-41525a3a65fd"). InnerVolumeSpecName "kube-api-access-ddxtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.365511 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff5dd2b5-591d-4832-992a-41525a3a65fd" (UID: "ff5dd2b5-591d-4832-992a-41525a3a65fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.435601 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5dd2b5-591d-4832-992a-41525a3a65fd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.435653 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddxtx\" (UniqueName: \"kubernetes.io/projected/ff5dd2b5-591d-4832-992a-41525a3a65fd-kube-api-access-ddxtx\") on node \"crc\" DevicePath \"\"" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.698462 4756 generic.go:334] "Generic (PLEG): container finished" podID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerID="9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae" exitCode=0 Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.698524 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb9l8" event={"ID":"ff5dd2b5-591d-4832-992a-41525a3a65fd","Type":"ContainerDied","Data":"9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae"} Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.698564 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zb9l8" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.698591 4756 scope.go:117] "RemoveContainer" containerID="9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.698579 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zb9l8" event={"ID":"ff5dd2b5-591d-4832-992a-41525a3a65fd","Type":"ContainerDied","Data":"f0c9b05d8ea60c00054714b6f2b2bc7156f9b5ccb6f699f6d3ed4ce4ef55bd94"} Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.724356 4756 scope.go:117] "RemoveContainer" containerID="8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.746078 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb9l8"] Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.752397 4756 scope.go:117] "RemoveContainer" containerID="3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.757911 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zb9l8"] Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.809514 4756 scope.go:117] "RemoveContainer" containerID="9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae" Dec 03 12:02:28 crc kubenswrapper[4756]: E1203 12:02:28.810359 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae\": container with ID starting with 9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae not found: ID does not exist" containerID="9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.810400 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae"} err="failed to get container status \"9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae\": rpc error: code = NotFound desc = could not find container \"9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae\": container with ID starting with 9f32ce50ff87d3c35cf2100865b114b39e72f4f416e311be25eb53f3abb30bae not found: ID does not exist" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.810426 4756 scope.go:117] "RemoveContainer" containerID="8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd" Dec 03 12:02:28 crc kubenswrapper[4756]: E1203 12:02:28.810779 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd\": container with ID starting with 8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd not found: ID does not exist" containerID="8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.810802 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd"} err="failed to get container status \"8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd\": rpc error: code = NotFound desc = could not find container \"8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd\": container with ID starting with 8c73ec0ab8b7468b0b3b73828f43f450c889e4b62c4470d23cceedf5b4e625fd not found: ID does not exist" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.810815 4756 scope.go:117] "RemoveContainer" containerID="3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5" Dec 03 12:02:28 crc kubenswrapper[4756]: E1203 
12:02:28.811092 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5\": container with ID starting with 3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5 not found: ID does not exist" containerID="3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5" Dec 03 12:02:28 crc kubenswrapper[4756]: I1203 12:02:28.811141 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5"} err="failed to get container status \"3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5\": rpc error: code = NotFound desc = could not find container \"3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5\": container with ID starting with 3d9d13a378d4a8f6718b9203f282bfa5c0b50256dc29f687702271f112618fb5 not found: ID does not exist" Dec 03 12:02:29 crc kubenswrapper[4756]: I1203 12:02:29.250203 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" path="/var/lib/kubelet/pods/ff5dd2b5-591d-4832-992a-41525a3a65fd/volumes" Dec 03 12:02:52 crc kubenswrapper[4756]: I1203 12:02:52.607061 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:02:52 crc kubenswrapper[4756]: I1203 12:02:52.607609 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 03 12:03:22 crc kubenswrapper[4756]: I1203 12:03:22.607335 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:03:22 crc kubenswrapper[4756]: I1203 12:03:22.607921 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:03:22 crc kubenswrapper[4756]: I1203 12:03:22.608010 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 12:03:22 crc kubenswrapper[4756]: I1203 12:03:22.609236 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"228e33b676c4abc082048d97b3f25a6d75245a7a0ec40b23df0b7da2dce8ecca"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 12:03:22 crc kubenswrapper[4756]: I1203 12:03:22.609294 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://228e33b676c4abc082048d97b3f25a6d75245a7a0ec40b23df0b7da2dce8ecca" gracePeriod=600 Dec 03 12:03:23 crc kubenswrapper[4756]: I1203 12:03:23.219757 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" 
containerID="228e33b676c4abc082048d97b3f25a6d75245a7a0ec40b23df0b7da2dce8ecca" exitCode=0 Dec 03 12:03:23 crc kubenswrapper[4756]: I1203 12:03:23.219889 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"228e33b676c4abc082048d97b3f25a6d75245a7a0ec40b23df0b7da2dce8ecca"} Dec 03 12:03:23 crc kubenswrapper[4756]: I1203 12:03:23.220109 4756 scope.go:117] "RemoveContainer" containerID="1b50024f542abd1430744a12ace1fb314adfe47cbe0f7a6d7daa832c1a56e066" Dec 03 12:03:24 crc kubenswrapper[4756]: I1203 12:03:24.232595 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f"} Dec 03 12:03:42 crc kubenswrapper[4756]: I1203 12:03:42.409676 4756 generic.go:334] "Generic (PLEG): container finished" podID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerID="2f5c1a60a1b8d20e545c60ec0ec356ec58b38a7b361b3a03d7873a526fa419b2" exitCode=0 Dec 03 12:03:42 crc kubenswrapper[4756]: I1203 12:03:42.409753 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9cbqb/must-gather-2wp67" event={"ID":"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0","Type":"ContainerDied","Data":"2f5c1a60a1b8d20e545c60ec0ec356ec58b38a7b361b3a03d7873a526fa419b2"} Dec 03 12:03:42 crc kubenswrapper[4756]: I1203 12:03:42.410897 4756 scope.go:117] "RemoveContainer" containerID="2f5c1a60a1b8d20e545c60ec0ec356ec58b38a7b361b3a03d7873a526fa419b2" Dec 03 12:03:42 crc kubenswrapper[4756]: I1203 12:03:42.676607 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9cbqb_must-gather-2wp67_c21f54f7-f69f-4d1a-9dcf-4bf45a638db0/gather/0.log" Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.002630 4756 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9cbqb/must-gather-2wp67"] Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.003651 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9cbqb/must-gather-2wp67" podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerName="copy" containerID="cri-o://059f1b7f238a49bd49b23a9904b125b7b56102a7a96585964e90a012c679c2b6" gracePeriod=2 Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.012208 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9cbqb/must-gather-2wp67"] Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.486371 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9cbqb_must-gather-2wp67_c21f54f7-f69f-4d1a-9dcf-4bf45a638db0/copy/0.log" Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.487385 4756 generic.go:334] "Generic (PLEG): container finished" podID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerID="059f1b7f238a49bd49b23a9904b125b7b56102a7a96585964e90a012c679c2b6" exitCode=143 Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.756187 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9cbqb_must-gather-2wp67_c21f54f7-f69f-4d1a-9dcf-4bf45a638db0/copy/0.log" Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.756657 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.947283 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-must-gather-output\") pod \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.947471 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcl9d\" (UniqueName: \"kubernetes.io/projected/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-kube-api-access-dcl9d\") pod \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\" (UID: \"c21f54f7-f69f-4d1a-9dcf-4bf45a638db0\") " Dec 03 12:03:50 crc kubenswrapper[4756]: I1203 12:03:50.954873 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-kube-api-access-dcl9d" (OuterVolumeSpecName: "kube-api-access-dcl9d") pod "c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" (UID: "c21f54f7-f69f-4d1a-9dcf-4bf45a638db0"). InnerVolumeSpecName "kube-api-access-dcl9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.049236 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcl9d\" (UniqueName: \"kubernetes.io/projected/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-kube-api-access-dcl9d\") on node \"crc\" DevicePath \"\"" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.091320 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" (UID: "c21f54f7-f69f-4d1a-9dcf-4bf45a638db0"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.152011 4756 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.296574 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" path="/var/lib/kubelet/pods/c21f54f7-f69f-4d1a-9dcf-4bf45a638db0/volumes" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.506496 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9cbqb_must-gather-2wp67_c21f54f7-f69f-4d1a-9dcf-4bf45a638db0/copy/0.log" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.506893 4756 scope.go:117] "RemoveContainer" containerID="059f1b7f238a49bd49b23a9904b125b7b56102a7a96585964e90a012c679c2b6" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.507084 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9cbqb/must-gather-2wp67" Dec 03 12:03:51 crc kubenswrapper[4756]: I1203 12:03:51.531183 4756 scope.go:117] "RemoveContainer" containerID="2f5c1a60a1b8d20e545c60ec0ec356ec58b38a7b361b3a03d7873a526fa419b2" Dec 03 12:04:12 crc kubenswrapper[4756]: I1203 12:04:12.538052 4756 scope.go:117] "RemoveContainer" containerID="c7049b81ed94164aca4b79d35efdcd8df6678b3db51fbeb7e034886c2804e0aa" Dec 03 12:05:52 crc kubenswrapper[4756]: I1203 12:05:52.607278 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:05:52 crc kubenswrapper[4756]: I1203 12:05:52.607828 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:06:22 crc kubenswrapper[4756]: I1203 12:06:22.607751 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:06:22 crc kubenswrapper[4756]: I1203 12:06:22.608276 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:06:52 crc kubenswrapper[4756]: I1203 12:06:52.607759 4756 
patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:06:52 crc kubenswrapper[4756]: I1203 12:06:52.608847 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:06:52 crc kubenswrapper[4756]: I1203 12:06:52.608932 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 12:06:52 crc kubenswrapper[4756]: I1203 12:06:52.610079 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 12:06:52 crc kubenswrapper[4756]: I1203 12:06:52.610130 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" gracePeriod=600 Dec 03 12:06:53 crc kubenswrapper[4756]: E1203 12:06:53.297907 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:06:53 crc kubenswrapper[4756]: I1203 12:06:53.311885 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" exitCode=0 Dec 03 12:06:53 crc kubenswrapper[4756]: I1203 12:06:53.311929 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f"} Dec 03 12:06:53 crc kubenswrapper[4756]: I1203 12:06:53.312093 4756 scope.go:117] "RemoveContainer" containerID="228e33b676c4abc082048d97b3f25a6d75245a7a0ec40b23df0b7da2dce8ecca" Dec 03 12:06:53 crc kubenswrapper[4756]: I1203 12:06:53.312995 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:06:53 crc kubenswrapper[4756]: E1203 12:06:53.313352 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.932369 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c2sww/must-gather-g8k5l"] Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933439 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933459 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933473 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="extract-content" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933482 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="extract-content" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933502 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="extract-content" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933510 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="extract-content" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933529 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="extract-utilities" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933537 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="extract-utilities" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933554 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="extract-utilities" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933561 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="extract-utilities" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933585 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933592 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933603 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerName="gather" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933610 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerName="gather" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933622 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933629 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933648 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="extract-content" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933655 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="extract-content" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933675 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="extract-utilities" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933682 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="extract-utilities" Dec 03 12:06:57 crc kubenswrapper[4756]: E1203 12:06:57.933694 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerName="copy" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933701 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerName="copy" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.933980 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5dd2b5-591d-4832-992a-41525a3a65fd" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.934000 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7adc085-6f10-4200-ac65-b9110b21d38a" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.934011 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a9d568-388c-42f0-a1a9-9deb135e0f93" containerName="registry-server" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.934027 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerName="gather" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.934042 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21f54f7-f69f-4d1a-9dcf-4bf45a638db0" containerName="copy" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.935342 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.939022 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c2sww"/"openshift-service-ca.crt" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.949727 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c2sww"/"kube-root-ca.crt" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.974483 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c2sww/must-gather-g8k5l"] Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.978083 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fn8f\" (UniqueName: \"kubernetes.io/projected/5a79df98-1323-4fd3-a2e6-600935fad91e-kube-api-access-4fn8f\") pod \"must-gather-g8k5l\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:57 crc kubenswrapper[4756]: I1203 12:06:57.978192 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a79df98-1323-4fd3-a2e6-600935fad91e-must-gather-output\") pod \"must-gather-g8k5l\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:58 crc kubenswrapper[4756]: I1203 12:06:58.079637 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a79df98-1323-4fd3-a2e6-600935fad91e-must-gather-output\") pod \"must-gather-g8k5l\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:58 crc kubenswrapper[4756]: I1203 12:06:58.080295 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4fn8f\" (UniqueName: \"kubernetes.io/projected/5a79df98-1323-4fd3-a2e6-600935fad91e-kube-api-access-4fn8f\") pod \"must-gather-g8k5l\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:58 crc kubenswrapper[4756]: I1203 12:06:58.080652 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a79df98-1323-4fd3-a2e6-600935fad91e-must-gather-output\") pod \"must-gather-g8k5l\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:58 crc kubenswrapper[4756]: I1203 12:06:58.103967 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fn8f\" (UniqueName: \"kubernetes.io/projected/5a79df98-1323-4fd3-a2e6-600935fad91e-kube-api-access-4fn8f\") pod \"must-gather-g8k5l\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:58 crc kubenswrapper[4756]: I1203 12:06:58.259637 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:06:58 crc kubenswrapper[4756]: I1203 12:06:58.769748 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c2sww/must-gather-g8k5l"] Dec 03 12:06:59 crc kubenswrapper[4756]: I1203 12:06:59.385886 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/must-gather-g8k5l" event={"ID":"5a79df98-1323-4fd3-a2e6-600935fad91e","Type":"ContainerStarted","Data":"8d00ed415d2c1b6202e841eaa8e09b1052638ab9c2b66bcf7f2558e6ae7bacb5"} Dec 03 12:06:59 crc kubenswrapper[4756]: I1203 12:06:59.386161 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/must-gather-g8k5l" event={"ID":"5a79df98-1323-4fd3-a2e6-600935fad91e","Type":"ContainerStarted","Data":"b1beaf1719b25bbf288a1fd0640661366dff1ca6bc075e28db65179eb2888296"} Dec 03 12:07:00 crc kubenswrapper[4756]: I1203 12:07:00.397744 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/must-gather-g8k5l" event={"ID":"5a79df98-1323-4fd3-a2e6-600935fad91e","Type":"ContainerStarted","Data":"da30b39c41647297faff0528cbb4830f09fba5051bf29ddb37e88ccf7eb7d062"} Dec 03 12:07:00 crc kubenswrapper[4756]: I1203 12:07:00.429381 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c2sww/must-gather-g8k5l" podStartSLOduration=3.429353721 podStartE2EDuration="3.429353721s" podCreationTimestamp="2025-12-03 12:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:07:00.418454757 +0000 UTC m=+4431.448456001" watchObservedRunningTime="2025-12-03 12:07:00.429353721 +0000 UTC m=+4431.459354975" Dec 03 12:07:01 crc kubenswrapper[4756]: I1203 12:07:01.803186 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="25a8024d-1033-41f9-a53f-6c5119388b40" 
containerName="galera" probeResult="failure" output="command timed out" Dec 03 12:07:01 crc kubenswrapper[4756]: I1203 12:07:01.804387 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="25a8024d-1033-41f9-a53f-6c5119388b40" containerName="galera" probeResult="failure" output="command timed out" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.250941 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c2sww/crc-debug-67vhb"] Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.252574 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.255576 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c2sww"/"default-dockercfg-8zn9c" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.386032 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f49pk\" (UniqueName: \"kubernetes.io/projected/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-kube-api-access-f49pk\") pod \"crc-debug-67vhb\" (UID: \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.386140 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-host\") pod \"crc-debug-67vhb\" (UID: \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.487762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f49pk\" (UniqueName: \"kubernetes.io/projected/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-kube-api-access-f49pk\") pod \"crc-debug-67vhb\" (UID: 
\"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.487825 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-host\") pod \"crc-debug-67vhb\" (UID: \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.487967 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-host\") pod \"crc-debug-67vhb\" (UID: \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.693719 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f49pk\" (UniqueName: \"kubernetes.io/projected/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-kube-api-access-f49pk\") pod \"crc-debug-67vhb\" (UID: \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:03 crc kubenswrapper[4756]: I1203 12:07:03.877336 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:04 crc kubenswrapper[4756]: I1203 12:07:04.435246 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/crc-debug-67vhb" event={"ID":"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0","Type":"ContainerStarted","Data":"dc055d2821fe0faa3a23d2e318129fee3d501b1c45ef9ab19cb9a3ec05eab8b1"} Dec 03 12:07:05 crc kubenswrapper[4756]: I1203 12:07:05.447115 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/crc-debug-67vhb" event={"ID":"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0","Type":"ContainerStarted","Data":"5f393e77113e47a16752b3f36e38c37b8a91a93dcd8ef8503c1a57a1b6171ed5"} Dec 03 12:07:05 crc kubenswrapper[4756]: I1203 12:07:05.466555 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c2sww/crc-debug-67vhb" podStartSLOduration=2.466514426 podStartE2EDuration="2.466514426s" podCreationTimestamp="2025-12-03 12:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 12:07:05.462970784 +0000 UTC m=+4436.492972038" watchObservedRunningTime="2025-12-03 12:07:05.466514426 +0000 UTC m=+4436.496515660" Dec 03 12:07:09 crc kubenswrapper[4756]: I1203 12:07:09.243590 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:07:09 crc kubenswrapper[4756]: E1203 12:07:09.244520 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:07:22 crc 
kubenswrapper[4756]: I1203 12:07:22.233930 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:07:22 crc kubenswrapper[4756]: E1203 12:07:22.236350 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:07:37 crc kubenswrapper[4756]: I1203 12:07:37.235174 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:07:37 crc kubenswrapper[4756]: E1203 12:07:37.236192 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:07:44 crc kubenswrapper[4756]: I1203 12:07:44.824871 4756 generic.go:334] "Generic (PLEG): container finished" podID="8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0" containerID="5f393e77113e47a16752b3f36e38c37b8a91a93dcd8ef8503c1a57a1b6171ed5" exitCode=0 Dec 03 12:07:44 crc kubenswrapper[4756]: I1203 12:07:44.824972 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/crc-debug-67vhb" event={"ID":"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0","Type":"ContainerDied","Data":"5f393e77113e47a16752b3f36e38c37b8a91a93dcd8ef8503c1a57a1b6171ed5"} Dec 03 12:07:45 crc kubenswrapper[4756]: I1203 12:07:45.965850 4756 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.001868 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c2sww/crc-debug-67vhb"] Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.010000 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c2sww/crc-debug-67vhb"] Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.012717 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-host\") pod \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\" (UID: \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.012861 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-host" (OuterVolumeSpecName: "host") pod "8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0" (UID: "8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.012887 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f49pk\" (UniqueName: \"kubernetes.io/projected/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-kube-api-access-f49pk\") pod \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\" (UID: \"8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0\") " Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.013786 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-host\") on node \"crc\" DevicePath \"\"" Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.033155 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-kube-api-access-f49pk" (OuterVolumeSpecName: "kube-api-access-f49pk") pod "8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0" (UID: "8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0"). InnerVolumeSpecName "kube-api-access-f49pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.116015 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f49pk\" (UniqueName: \"kubernetes.io/projected/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0-kube-api-access-f49pk\") on node \"crc\" DevicePath \"\"" Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.846031 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc055d2821fe0faa3a23d2e318129fee3d501b1c45ef9ab19cb9a3ec05eab8b1" Dec 03 12:07:46 crc kubenswrapper[4756]: I1203 12:07:46.846117 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-67vhb" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.180622 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c2sww/crc-debug-rv6k8"] Dec 03 12:07:47 crc kubenswrapper[4756]: E1203 12:07:47.181056 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0" containerName="container-00" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.181073 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0" containerName="container-00" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.181320 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0" containerName="container-00" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.182075 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.184634 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c2sww"/"default-dockercfg-8zn9c" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.238025 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27648a-ee40-4626-857d-caedeb84c4bf-host\") pod \"crc-debug-rv6k8\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.238200 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ffqt\" (UniqueName: \"kubernetes.io/projected/9a27648a-ee40-4626-857d-caedeb84c4bf-kube-api-access-2ffqt\") pod \"crc-debug-rv6k8\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " 
pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.248084 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0" path="/var/lib/kubelet/pods/8dd9fba6-9f83-4aaa-a1f7-06f4960a1ac0/volumes" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.340647 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ffqt\" (UniqueName: \"kubernetes.io/projected/9a27648a-ee40-4626-857d-caedeb84c4bf-kube-api-access-2ffqt\") pod \"crc-debug-rv6k8\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.340881 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27648a-ee40-4626-857d-caedeb84c4bf-host\") pod \"crc-debug-rv6k8\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.341023 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27648a-ee40-4626-857d-caedeb84c4bf-host\") pod \"crc-debug-rv6k8\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.362885 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ffqt\" (UniqueName: \"kubernetes.io/projected/9a27648a-ee40-4626-857d-caedeb84c4bf-kube-api-access-2ffqt\") pod \"crc-debug-rv6k8\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.503754 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:47 crc kubenswrapper[4756]: W1203 12:07:47.540146 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a27648a_ee40_4626_857d_caedeb84c4bf.slice/crio-8537b562dce8fdb0e4ec90ba43d24b7eca96b4a4d039915cba0fc00cd0775dac WatchSource:0}: Error finding container 8537b562dce8fdb0e4ec90ba43d24b7eca96b4a4d039915cba0fc00cd0775dac: Status 404 returned error can't find the container with id 8537b562dce8fdb0e4ec90ba43d24b7eca96b4a4d039915cba0fc00cd0775dac Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.860594 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/crc-debug-rv6k8" event={"ID":"9a27648a-ee40-4626-857d-caedeb84c4bf","Type":"ContainerStarted","Data":"c6e5944c010ee77d8bf86d14c3c680680db2b52d84737708c9a2a17889f45514"} Dec 03 12:07:47 crc kubenswrapper[4756]: I1203 12:07:47.860922 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/crc-debug-rv6k8" event={"ID":"9a27648a-ee40-4626-857d-caedeb84c4bf","Type":"ContainerStarted","Data":"8537b562dce8fdb0e4ec90ba43d24b7eca96b4a4d039915cba0fc00cd0775dac"} Dec 03 12:07:48 crc kubenswrapper[4756]: I1203 12:07:48.453327 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c2sww/crc-debug-rv6k8"] Dec 03 12:07:48 crc kubenswrapper[4756]: I1203 12:07:48.461200 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c2sww/crc-debug-rv6k8"] Dec 03 12:07:48 crc kubenswrapper[4756]: I1203 12:07:48.872708 4756 generic.go:334] "Generic (PLEG): container finished" podID="9a27648a-ee40-4626-857d-caedeb84c4bf" containerID="c6e5944c010ee77d8bf86d14c3c680680db2b52d84737708c9a2a17889f45514" exitCode=0 Dec 03 12:07:48 crc kubenswrapper[4756]: I1203 12:07:48.999496 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.074607 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ffqt\" (UniqueName: \"kubernetes.io/projected/9a27648a-ee40-4626-857d-caedeb84c4bf-kube-api-access-2ffqt\") pod \"9a27648a-ee40-4626-857d-caedeb84c4bf\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.074714 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27648a-ee40-4626-857d-caedeb84c4bf-host\") pod \"9a27648a-ee40-4626-857d-caedeb84c4bf\" (UID: \"9a27648a-ee40-4626-857d-caedeb84c4bf\") " Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.074812 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a27648a-ee40-4626-857d-caedeb84c4bf-host" (OuterVolumeSpecName: "host") pod "9a27648a-ee40-4626-857d-caedeb84c4bf" (UID: "9a27648a-ee40-4626-857d-caedeb84c4bf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.075334 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27648a-ee40-4626-857d-caedeb84c4bf-host\") on node \"crc\" DevicePath \"\"" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.592013 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a27648a-ee40-4626-857d-caedeb84c4bf-kube-api-access-2ffqt" (OuterVolumeSpecName: "kube-api-access-2ffqt") pod "9a27648a-ee40-4626-857d-caedeb84c4bf" (UID: "9a27648a-ee40-4626-857d-caedeb84c4bf"). InnerVolumeSpecName "kube-api-access-2ffqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.620640 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c2sww/crc-debug-gnvrf"] Dec 03 12:07:49 crc kubenswrapper[4756]: E1203 12:07:49.621342 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a27648a-ee40-4626-857d-caedeb84c4bf" containerName="container-00" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.621403 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a27648a-ee40-4626-857d-caedeb84c4bf" containerName="container-00" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.621839 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a27648a-ee40-4626-857d-caedeb84c4bf" containerName="container-00" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.622918 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.692737 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c39d444-04cd-417d-8c73-9631356dec48-host\") pod \"crc-debug-gnvrf\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.693008 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kcq\" (UniqueName: \"kubernetes.io/projected/4c39d444-04cd-417d-8c73-9631356dec48-kube-api-access-z9kcq\") pod \"crc-debug-gnvrf\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.693197 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ffqt\" (UniqueName: 
\"kubernetes.io/projected/9a27648a-ee40-4626-857d-caedeb84c4bf-kube-api-access-2ffqt\") on node \"crc\" DevicePath \"\"" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.803290 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9kcq\" (UniqueName: \"kubernetes.io/projected/4c39d444-04cd-417d-8c73-9631356dec48-kube-api-access-z9kcq\") pod \"crc-debug-gnvrf\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.803440 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c39d444-04cd-417d-8c73-9631356dec48-host\") pod \"crc-debug-gnvrf\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.803532 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c39d444-04cd-417d-8c73-9631356dec48-host\") pod \"crc-debug-gnvrf\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.824576 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9kcq\" (UniqueName: \"kubernetes.io/projected/4c39d444-04cd-417d-8c73-9631356dec48-kube-api-access-z9kcq\") pod \"crc-debug-gnvrf\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.883134 4756 scope.go:117] "RemoveContainer" containerID="c6e5944c010ee77d8bf86d14c3c680680db2b52d84737708c9a2a17889f45514" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.883300 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-rv6k8" Dec 03 12:07:49 crc kubenswrapper[4756]: I1203 12:07:49.963114 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:49 crc kubenswrapper[4756]: W1203 12:07:49.995346 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c39d444_04cd_417d_8c73_9631356dec48.slice/crio-09a3575e20ea29c474f7b7ea6e8bceb6ac9e5767897ad51130be5f8db8796a6f WatchSource:0}: Error finding container 09a3575e20ea29c474f7b7ea6e8bceb6ac9e5767897ad51130be5f8db8796a6f: Status 404 returned error can't find the container with id 09a3575e20ea29c474f7b7ea6e8bceb6ac9e5767897ad51130be5f8db8796a6f Dec 03 12:07:50 crc kubenswrapper[4756]: I1203 12:07:50.897161 4756 generic.go:334] "Generic (PLEG): container finished" podID="4c39d444-04cd-417d-8c73-9631356dec48" containerID="cd4a31197dd8abf34c66dff4d004d7cab614c74566d16716df094659204352f3" exitCode=0 Dec 03 12:07:50 crc kubenswrapper[4756]: I1203 12:07:50.897294 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/crc-debug-gnvrf" event={"ID":"4c39d444-04cd-417d-8c73-9631356dec48","Type":"ContainerDied","Data":"cd4a31197dd8abf34c66dff4d004d7cab614c74566d16716df094659204352f3"} Dec 03 12:07:50 crc kubenswrapper[4756]: I1203 12:07:50.897551 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/crc-debug-gnvrf" event={"ID":"4c39d444-04cd-417d-8c73-9631356dec48","Type":"ContainerStarted","Data":"09a3575e20ea29c474f7b7ea6e8bceb6ac9e5767897ad51130be5f8db8796a6f"} Dec 03 12:07:50 crc kubenswrapper[4756]: I1203 12:07:50.937833 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c2sww/crc-debug-gnvrf"] Dec 03 12:07:50 crc kubenswrapper[4756]: I1203 12:07:50.948942 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-c2sww/crc-debug-gnvrf"] Dec 03 12:07:51 crc kubenswrapper[4756]: I1203 12:07:51.246034 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a27648a-ee40-4626-857d-caedeb84c4bf" path="/var/lib/kubelet/pods/9a27648a-ee40-4626-857d-caedeb84c4bf/volumes" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.033089 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.189735 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c39d444-04cd-417d-8c73-9631356dec48-host\") pod \"4c39d444-04cd-417d-8c73-9631356dec48\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.189896 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c39d444-04cd-417d-8c73-9631356dec48-host" (OuterVolumeSpecName: "host") pod "4c39d444-04cd-417d-8c73-9631356dec48" (UID: "4c39d444-04cd-417d-8c73-9631356dec48"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.189956 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9kcq\" (UniqueName: \"kubernetes.io/projected/4c39d444-04cd-417d-8c73-9631356dec48-kube-api-access-z9kcq\") pod \"4c39d444-04cd-417d-8c73-9631356dec48\" (UID: \"4c39d444-04cd-417d-8c73-9631356dec48\") " Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.190510 4756 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c39d444-04cd-417d-8c73-9631356dec48-host\") on node \"crc\" DevicePath \"\"" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.197546 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c39d444-04cd-417d-8c73-9631356dec48-kube-api-access-z9kcq" (OuterVolumeSpecName: "kube-api-access-z9kcq") pod "4c39d444-04cd-417d-8c73-9631356dec48" (UID: "4c39d444-04cd-417d-8c73-9631356dec48"). InnerVolumeSpecName "kube-api-access-z9kcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.234344 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:07:52 crc kubenswrapper[4756]: E1203 12:07:52.234667 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.293123 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9kcq\" (UniqueName: \"kubernetes.io/projected/4c39d444-04cd-417d-8c73-9631356dec48-kube-api-access-z9kcq\") on node \"crc\" DevicePath \"\"" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.933841 4756 scope.go:117] "RemoveContainer" containerID="cd4a31197dd8abf34c66dff4d004d7cab614c74566d16716df094659204352f3" Dec 03 12:07:52 crc kubenswrapper[4756]: I1203 12:07:52.934107 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/crc-debug-gnvrf" Dec 03 12:07:53 crc kubenswrapper[4756]: I1203 12:07:53.246333 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c39d444-04cd-417d-8c73-9631356dec48" path="/var/lib/kubelet/pods/4c39d444-04cd-417d-8c73-9631356dec48/volumes" Dec 03 12:08:05 crc kubenswrapper[4756]: I1203 12:08:05.234111 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:08:05 crc kubenswrapper[4756]: E1203 12:08:05.234926 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:08:18 crc kubenswrapper[4756]: I1203 12:08:18.487428 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75c47c8598-6kchw_09b51d51-7a9c-4b73-a277-c488661e4af0/barbican-api/0.log" Dec 03 12:08:18 crc kubenswrapper[4756]: I1203 12:08:18.553826 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-75c47c8598-6kchw_09b51d51-7a9c-4b73-a277-c488661e4af0/barbican-api-log/0.log" Dec 03 12:08:18 crc kubenswrapper[4756]: I1203 12:08:18.665641 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6796bdbbcb-s99wh_ffa571bc-b5f1-4b8f-be29-9eae5f21db25/barbican-keystone-listener/0.log" Dec 03 12:08:18 crc kubenswrapper[4756]: I1203 12:08:18.812902 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6796bdbbcb-s99wh_ffa571bc-b5f1-4b8f-be29-9eae5f21db25/barbican-keystone-listener-log/0.log" Dec 03 12:08:18 crc kubenswrapper[4756]: 
I1203 12:08:18.921073 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-774fbcb69f-lqf5k_2fb0446e-9a35-4390-9290-b539e6a8718e/barbican-worker/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.174023 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-l7s9b_5e68e04a-ae64-43a6-8dd0-a3fdeb9642e2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.204761 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-774fbcb69f-lqf5k_2fb0446e-9a35-4390-9290-b539e6a8718e/barbican-worker-log/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.244889 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:08:19 crc kubenswrapper[4756]: E1203 12:08:19.245310 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.371337 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/ceilometer-central-agent/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.452982 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/ceilometer-notification-agent/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.498870 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/proxy-httpd/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.550294 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3610ec2a-6a5b-4cae-9863-e5ab6f3267ed/sg-core/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.916063 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cada869c-6167-4fd2-b8ad-470d18f09cf4/cinder-api-log/0.log" Dec 03 12:08:19 crc kubenswrapper[4756]: I1203 12:08:19.978766 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_cada869c-6167-4fd2-b8ad-470d18f09cf4/cinder-api/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.237326 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ea68c0aa-0cf1-4437-ad1a-988b44fe4032/cinder-scheduler/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.259568 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ea68c0aa-0cf1-4437-ad1a-988b44fe4032/probe/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.374471 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-662qm_13546056-cc4c-4f52-81ab-79909380facb/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.577653 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v4vhs_db190d36-8d79-4e74-8ba0-898731235d0a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.601640 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rb9d7_cdfe5594-c723-4251-9daa-64c59d20f048/init/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.850414 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rb9d7_cdfe5594-c723-4251-9daa-64c59d20f048/init/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.914248 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-rb9d7_cdfe5594-c723-4251-9daa-64c59d20f048/dnsmasq-dns/0.log" Dec 03 12:08:20 crc kubenswrapper[4756]: I1203 12:08:20.933156 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6jd54_e9caa0fb-3dd9-4219-9657-e61712c336e8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:21 crc kubenswrapper[4756]: I1203 12:08:21.131524 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b66578e3-f591-49d8-b76a-1fc8f8f0d262/glance-httpd/0.log" Dec 03 12:08:21 crc kubenswrapper[4756]: I1203 12:08:21.144874 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b66578e3-f591-49d8-b76a-1fc8f8f0d262/glance-log/0.log" Dec 03 12:08:21 crc kubenswrapper[4756]: I1203 12:08:21.358549 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_543ab57c-507b-4266-9105-e7e09e254311/glance-log/0.log" Dec 03 12:08:21 crc kubenswrapper[4756]: I1203 12:08:21.400129 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_543ab57c-507b-4266-9105-e7e09e254311/glance-httpd/0.log" Dec 03 12:08:21 crc kubenswrapper[4756]: I1203 12:08:21.571103 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bc647888-tcn4m_00c35a0d-70b4-453d-974a-85b638505280/horizon/1.log" Dec 03 12:08:21 crc kubenswrapper[4756]: I1203 12:08:21.815498 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bc647888-tcn4m_00c35a0d-70b4-453d-974a-85b638505280/horizon/0.log" Dec 03 12:08:21 
crc kubenswrapper[4756]: I1203 12:08:21.868704 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mcwfs_42d479c1-c36c-4481-a4e1-7e155283859d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:22 crc kubenswrapper[4756]: I1203 12:08:22.100669 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rfcxf_d75f87e1-8371-4657-853a-3ad9c89bbc74/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:22 crc kubenswrapper[4756]: I1203 12:08:22.127005 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-66bc647888-tcn4m_00c35a0d-70b4-453d-974a-85b638505280/horizon-log/0.log" Dec 03 12:08:22 crc kubenswrapper[4756]: I1203 12:08:22.375600 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29412721-gknsp_ff18af3a-68ab-4008-9f9d-cb5733c9a529/keystone-cron/0.log" Dec 03 12:08:22 crc kubenswrapper[4756]: I1203 12:08:22.590815 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c6566fd84-f6lrg_9b1ddcb6-cd3d-438f-a007-a1527ab5be16/keystone-api/0.log" Dec 03 12:08:22 crc kubenswrapper[4756]: I1203 12:08:22.620139 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f5a21d10-a957-4ce1-b804-b75db51fe53c/kube-state-metrics/0.log" Dec 03 12:08:22 crc kubenswrapper[4756]: I1203 12:08:22.756328 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bwpz5_23845452-bd2b-4841-b275-0adbc22178c1/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:23 crc kubenswrapper[4756]: I1203 12:08:23.434406 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-745d87f76f-mbrkc_4234afcf-96f0-4340-b5c0-e1aac6c4dacb/neutron-httpd/0.log" Dec 03 12:08:23 crc kubenswrapper[4756]: I1203 12:08:23.537858 4756 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zm2g7_e51da37d-c82b-43d5-b61c-5199ca9321d2/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:23 crc kubenswrapper[4756]: I1203 12:08:23.544338 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-745d87f76f-mbrkc_4234afcf-96f0-4340-b5c0-e1aac6c4dacb/neutron-api/0.log" Dec 03 12:08:24 crc kubenswrapper[4756]: I1203 12:08:24.219929 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9c016bc8-281b-4dcc-9475-9c373175f026/nova-api-log/0.log" Dec 03 12:08:24 crc kubenswrapper[4756]: I1203 12:08:24.335608 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9c862336-6a53-419e-aaf8-f64358150259/nova-cell0-conductor-conductor/0.log" Dec 03 12:08:24 crc kubenswrapper[4756]: I1203 12:08:24.622154 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9c016bc8-281b-4dcc-9475-9c373175f026/nova-api-api/0.log" Dec 03 12:08:25 crc kubenswrapper[4756]: I1203 12:08:25.237218 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4914547a-e3af-4ae6-8f6b-a210ed169dfd/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 12:08:25 crc kubenswrapper[4756]: I1203 12:08:25.240529 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7945f798-6dd7-4887-9cb3-72852427cf8e/nova-cell1-conductor-conductor/0.log" Dec 03 12:08:25 crc kubenswrapper[4756]: I1203 12:08:25.457097 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jv7s6_edbbebac-e053-45ad-9b17-83d0e55fba86/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:25 crc kubenswrapper[4756]: I1203 12:08:25.661923 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_aae26896-15a3-48e5-b96a-136209092056/nova-metadata-log/0.log" Dec 03 12:08:26 crc kubenswrapper[4756]: I1203 12:08:26.012359 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25a8024d-1033-41f9-a53f-6c5119388b40/mysql-bootstrap/0.log" Dec 03 12:08:26 crc kubenswrapper[4756]: I1203 12:08:26.083082 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_50ce8020-3f9b-4085-9f66-c05f682cde05/nova-scheduler-scheduler/0.log" Dec 03 12:08:26 crc kubenswrapper[4756]: I1203 12:08:26.854664 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25a8024d-1033-41f9-a53f-6c5119388b40/mysql-bootstrap/0.log" Dec 03 12:08:26 crc kubenswrapper[4756]: I1203 12:08:26.858132 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25a8024d-1033-41f9-a53f-6c5119388b40/galera/0.log" Dec 03 12:08:27 crc kubenswrapper[4756]: I1203 12:08:27.355036 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6980bf2-fd5f-4cb1-b148-414229444006/mysql-bootstrap/0.log" Dec 03 12:08:27 crc kubenswrapper[4756]: I1203 12:08:27.523200 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aae26896-15a3-48e5-b96a-136209092056/nova-metadata-metadata/0.log" Dec 03 12:08:27 crc kubenswrapper[4756]: I1203 12:08:27.531092 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6980bf2-fd5f-4cb1-b148-414229444006/mysql-bootstrap/0.log" Dec 03 12:08:27 crc kubenswrapper[4756]: I1203 12:08:27.556669 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6980bf2-fd5f-4cb1-b148-414229444006/galera/0.log" Dec 03 12:08:27 crc kubenswrapper[4756]: I1203 12:08:27.770369 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_8907533b-6dc9-48f9-8938-7089e2c0cbf5/openstackclient/0.log" Dec 03 12:08:27 crc kubenswrapper[4756]: I1203 12:08:27.931997 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9vc6f_7b8a2775-a311-44e5-80da-356fcba8da63/openstack-network-exporter/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.047530 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovsdb-server-init/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.282093 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovs-vswitchd/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.382070 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovsdb-server-init/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.390827 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-269cd_1a408782-00c0-46f6-8559-023f8753699e/ovsdb-server/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.569487 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xlz9h_e033887b-a32e-4141-9812-455b70f85d39/ovn-controller/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.667005 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zrq6q_33ee231c-20f0-429c-92a2-7001e843e8b3/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.794800 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0976760e-f227-4d4e-a8a3-ed0ac129702c/openstack-network-exporter/0.log" Dec 03 12:08:28 crc kubenswrapper[4756]: I1203 12:08:28.988253 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0976760e-f227-4d4e-a8a3-ed0ac129702c/ovn-northd/0.log" Dec 03 12:08:29 crc kubenswrapper[4756]: I1203 12:08:29.026650 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e429c7e2-748f-4231-902c-00290ebe9eb9/openstack-network-exporter/0.log" Dec 03 12:08:29 crc kubenswrapper[4756]: I1203 12:08:29.050758 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e429c7e2-748f-4231-902c-00290ebe9eb9/ovsdbserver-nb/0.log" Dec 03 12:08:29 crc kubenswrapper[4756]: I1203 12:08:29.300040 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_77eb8ce5-5779-43bf-a57b-7ace73542f58/openstack-network-exporter/0.log" Dec 03 12:08:29 crc kubenswrapper[4756]: I1203 12:08:29.343856 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_77eb8ce5-5779-43bf-a57b-7ace73542f58/ovsdbserver-sb/0.log" Dec 03 12:08:29 crc kubenswrapper[4756]: I1203 12:08:29.735663 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1bfa9eab-e774-49f9-b1f6-f2afba51c9ae/setup-container/0.log" Dec 03 12:08:29 crc kubenswrapper[4756]: I1203 12:08:29.755533 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d57c9944-4rhnd_10ef169a-f6d1-4d7e-9ff1-8cca85adce2b/placement-api/0.log" Dec 03 12:08:29 crc kubenswrapper[4756]: I1203 12:08:29.891203 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d57c9944-4rhnd_10ef169a-f6d1-4d7e-9ff1-8cca85adce2b/placement-log/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.031438 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1bfa9eab-e774-49f9-b1f6-f2afba51c9ae/setup-container/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.095881 4756 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1bfa9eab-e774-49f9-b1f6-f2afba51c9ae/rabbitmq/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.142296 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a347e9e2-376b-44ac-92de-25736c30ec1e/setup-container/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.402847 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a347e9e2-376b-44ac-92de-25736c30ec1e/setup-container/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.435367 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a347e9e2-376b-44ac-92de-25736c30ec1e/rabbitmq/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.454052 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-89wlj_8e9f7a37-9bb0-4f3e-bdfd-962164857651/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.617343 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xksxt_88321f96-08ea-4c4d-9665-8530b28e1a66/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:30 crc kubenswrapper[4756]: I1203 12:08:30.919690 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-klk22_1aab3068-a578-4f78-8326-53aeb4dd74bf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:31 crc kubenswrapper[4756]: I1203 12:08:31.122666 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-wrchj_6252d786-f230-4eed-bc78-b688e07c12e7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:31 crc kubenswrapper[4756]: I1203 12:08:31.252903 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-rmcgx_5b63b8f2-0259-45ce-b2c3-18b043ee2fab/ssh-known-hosts-edpm-deployment/0.log" Dec 03 12:08:31 crc kubenswrapper[4756]: I1203 12:08:31.561253 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cdcf55d99-k7bmw_2f962509-8bee-4b75-a51f-f517ffa88908/proxy-server/0.log" Dec 03 12:08:31 crc kubenswrapper[4756]: I1203 12:08:31.603492 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-cdcf55d99-k7bmw_2f962509-8bee-4b75-a51f-f517ffa88908/proxy-httpd/0.log" Dec 03 12:08:31 crc kubenswrapper[4756]: I1203 12:08:31.658175 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zbzgq_5cfbb2e2-672e-48fb-8916-ccb83e962bf3/swift-ring-rebalance/0.log" Dec 03 12:08:31 crc kubenswrapper[4756]: I1203 12:08:31.903246 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-auditor/0.log" Dec 03 12:08:31 crc kubenswrapper[4756]: I1203 12:08:31.934069 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-reaper/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.068147 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-replicator/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.112699 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/account-server/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.125276 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-auditor/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.228262 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-replicator/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.354660 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-server/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.387902 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-auditor/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.394283 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/container-updater/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.544899 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-expirer/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.683462 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-server/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.689882 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-updater/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.704523 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/object-replicator/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.825279 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/rsync/0.log" Dec 03 12:08:32 crc kubenswrapper[4756]: I1203 12:08:32.941072 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_99507e0d-929b-4d13-b820-5fd2869d776e/swift-recon-cron/0.log" Dec 03 12:08:33 crc kubenswrapper[4756]: I1203 12:08:33.070395 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-89lxh_9dab21b5-7428-46b5-8d98-956b18345f6d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:33 crc kubenswrapper[4756]: I1203 12:08:33.233974 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:08:33 crc kubenswrapper[4756]: E1203 12:08:33.234269 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:08:33 crc kubenswrapper[4756]: I1203 12:08:33.381409 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f9a7368b-5739-4366-8a70-e33f19837e9a/tempest-tests-tempest-tests-runner/0.log" Dec 03 12:08:33 crc kubenswrapper[4756]: I1203 12:08:33.413194 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b9103e77-3037-4d26-946a-822bdd2ba611/test-operator-logs-container/0.log" Dec 03 12:08:33 crc kubenswrapper[4756]: I1203 12:08:33.673861 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kt9mk_cc23e11e-fc64-4cce-8ab5-4e63f64ccb11/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 12:08:41 crc kubenswrapper[4756]: I1203 12:08:41.513079 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_81dedd61-3ae8-42b1-8af2-20fe40b22eb7/memcached/0.log" Dec 03 12:08:45 crc kubenswrapper[4756]: I1203 12:08:45.238504 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:08:45 crc kubenswrapper[4756]: E1203 12:08:45.240943 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:08:57 crc kubenswrapper[4756]: I1203 12:08:57.234536 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:08:57 crc kubenswrapper[4756]: E1203 12:08:57.235376 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:09:03 crc kubenswrapper[4756]: I1203 12:09:03.450258 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/util/0.log" Dec 03 12:09:03 crc kubenswrapper[4756]: I1203 12:09:03.800351 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/pull/0.log" Dec 03 12:09:03 crc 
kubenswrapper[4756]: I1203 12:09:03.864725 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/util/0.log" Dec 03 12:09:03 crc kubenswrapper[4756]: I1203 12:09:03.871620 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/pull/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.043056 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/util/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.065882 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/pull/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.073383 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_93478431aa2b3d3b80c27233e6c01b4e808bbc7d94ec8fef50d094ae1c686l9_3a99ddd7-a877-4ce4-b97c-65350ab2af24/extract/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.247812 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cg2g8_02ff397c-db50-4b3b-be2c-b43dcd1c71db/kube-rbac-proxy/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.364575 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-cg2g8_02ff397c-db50-4b3b-be2c-b43dcd1c71db/manager/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.412238 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-vqlmr_22eded41-fc81-4a9c-b831-8cfb8d339258/kube-rbac-proxy/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.511457 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-vqlmr_22eded41-fc81-4a9c-b831-8cfb8d339258/manager/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.587771 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-62955_f651f26f-55f4-47cb-a318-0e2a9512f194/kube-rbac-proxy/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.662216 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-62955_f651f26f-55f4-47cb-a318-0e2a9512f194/manager/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.910582 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q5klh_77626dde-3586-4b33-b4a4-326bab5bfe19/kube-rbac-proxy/0.log" Dec 03 12:09:04 crc kubenswrapper[4756]: I1203 12:09:04.916574 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-q5klh_77626dde-3586-4b33-b4a4-326bab5bfe19/manager/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.041323 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gqmt7_0f3ba133-8575-4603-ad87-77502244b892/kube-rbac-proxy/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.118198 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gqmt7_0f3ba133-8575-4603-ad87-77502244b892/manager/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.175923 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-crd8q_d605c327-c0d8-4466-b135-a1c8c777b91c/kube-rbac-proxy/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.300939 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-crd8q_d605c327-c0d8-4466-b135-a1c8c777b91c/manager/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.431642 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hpd8p_0df6dfa3-00de-4da1-a132-358b5f6a66e9/kube-rbac-proxy/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.586256 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-hpd8p_0df6dfa3-00de-4da1-a132-358b5f6a66e9/manager/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.597713 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-hc7n7_786fcded-8103-4691-b8a4-fa6ef5b79ee6/kube-rbac-proxy/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.646390 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-hc7n7_786fcded-8103-4691-b8a4-fa6ef5b79ee6/manager/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.786170 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bgw6p_ed53f929-2cac-4053-8875-ad53414156c1/kube-rbac-proxy/0.log" Dec 03 12:09:05 crc kubenswrapper[4756]: I1203 12:09:05.950606 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bgw6p_ed53f929-2cac-4053-8875-ad53414156c1/manager/0.log" Dec 03 12:09:06 
crc kubenswrapper[4756]: I1203 12:09:06.029042 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-dk746_419295e1-b487-4701-9fc8-0273a49277dc/kube-rbac-proxy/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.178086 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2kq6w_0606b8cc-309a-4759-82d9-989ef224169b/kube-rbac-proxy/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.218751 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-dk746_419295e1-b487-4701-9fc8-0273a49277dc/manager/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.244508 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-2kq6w_0606b8cc-309a-4759-82d9-989ef224169b/manager/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.507138 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-xdlgn_ef02c569-7cc5-43a3-a4e9-d8c97cc07465/kube-rbac-proxy/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.566931 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-xdlgn_ef02c569-7cc5-43a3-a4e9-d8c97cc07465/manager/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.651939 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-cxgxw_53d098b0-4833-4b74-b13d-57a4d9c5ee13/kube-rbac-proxy/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.804668 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-wlfff_52cfac1b-a56f-4189-b9f0-5c4a8acf2069/kube-rbac-proxy/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.814277 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-cxgxw_53d098b0-4833-4b74-b13d-57a4d9c5ee13/manager/0.log" Dec 03 12:09:06 crc kubenswrapper[4756]: I1203 12:09:06.900029 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-wlfff_52cfac1b-a56f-4189-b9f0-5c4a8acf2069/manager/0.log" Dec 03 12:09:07 crc kubenswrapper[4756]: I1203 12:09:07.029529 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4_02b366f7-138d-4a78-9772-8e22db219753/kube-rbac-proxy/0.log" Dec 03 12:09:07 crc kubenswrapper[4756]: I1203 12:09:07.074966 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4scxg4_02b366f7-138d-4a78-9772-8e22db219753/manager/0.log" Dec 03 12:09:07 crc kubenswrapper[4756]: I1203 12:09:07.473834 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-779dc79ddf-bn5rj_3ae84629-3d85-49fb-a3d9-93c766c1be75/operator/0.log" Dec 03 12:09:07 crc kubenswrapper[4756]: I1203 12:09:07.726705 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5r2k8_1b7d149f-f0e8-4a81-b7c6-e581a6f12d06/registry-server/0.log" Dec 03 12:09:07 crc kubenswrapper[4756]: I1203 12:09:07.918282 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-bbqwn_ce02ca0e-abd9-4a57-a68d-7d35f304f8fa/kube-rbac-proxy/0.log" Dec 03 12:09:07 crc kubenswrapper[4756]: I1203 12:09:07.986005 
4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-bbqwn_ce02ca0e-abd9-4a57-a68d-7d35f304f8fa/manager/0.log" Dec 03 12:09:08 crc kubenswrapper[4756]: I1203 12:09:08.117771 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9r6sc_ded35123-da5e-4a26-9caf-61d6c9d920cd/kube-rbac-proxy/0.log" Dec 03 12:09:08 crc kubenswrapper[4756]: I1203 12:09:08.209795 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-9r6sc_ded35123-da5e-4a26-9caf-61d6c9d920cd/manager/0.log" Dec 03 12:09:08 crc kubenswrapper[4756]: I1203 12:09:08.238593 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fkx8p_0b3df789-34ca-4ab2-8fb7-a8fee4df46a7/operator/0.log" Dec 03 12:09:08 crc kubenswrapper[4756]: I1203 12:09:08.433218 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6c5c989645-dsrph_d88ee2e4-3954-487f-991e-b0f3e66b176a/manager/0.log" Dec 03 12:09:08 crc kubenswrapper[4756]: I1203 12:09:08.926288 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-v4kqz_e8df208a-e66b-4532-bb65-8c673f2659bc/manager/0.log" Dec 03 12:09:08 crc kubenswrapper[4756]: I1203 12:09:08.945007 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-v4kqz_e8df208a-e66b-4532-bb65-8c673f2659bc/kube-rbac-proxy/0.log" Dec 03 12:09:08 crc kubenswrapper[4756]: I1203 12:09:08.971502 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-9t6v5_6c2df820-90b9-48fe-8dd0-8731028d0dbd/kube-rbac-proxy/0.log" Dec 03 12:09:08 crc 
kubenswrapper[4756]: I1203 12:09:08.988922 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-9t6v5_6c2df820-90b9-48fe-8dd0-8731028d0dbd/manager/0.log" Dec 03 12:09:09 crc kubenswrapper[4756]: I1203 12:09:09.138418 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-srv8g_a5dd2be8-1335-43f1-83af-2a0efabcce1e/kube-rbac-proxy/0.log" Dec 03 12:09:09 crc kubenswrapper[4756]: I1203 12:09:09.198261 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-g6zjw_77043fde-1e4d-4590-b97c-3de89953581a/kube-rbac-proxy/0.log" Dec 03 12:09:09 crc kubenswrapper[4756]: I1203 12:09:09.213729 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-srv8g_a5dd2be8-1335-43f1-83af-2a0efabcce1e/manager/0.log" Dec 03 12:09:09 crc kubenswrapper[4756]: I1203 12:09:09.396942 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-g6zjw_77043fde-1e4d-4590-b97c-3de89953581a/manager/0.log" Dec 03 12:09:11 crc kubenswrapper[4756]: I1203 12:09:11.234294 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:09:11 crc kubenswrapper[4756]: E1203 12:09:11.235150 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:09:24 crc kubenswrapper[4756]: I1203 12:09:24.234321 4756 
scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:09:24 crc kubenswrapper[4756]: E1203 12:09:24.234990 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:09:30 crc kubenswrapper[4756]: I1203 12:09:30.084258 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-kpxd4_88056822-ddb3-47aa-b15e-f344471f6b0a/control-plane-machine-set-operator/0.log" Dec 03 12:09:30 crc kubenswrapper[4756]: I1203 12:09:30.237664 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gjjx2_35183c23-2ddd-4984-8ba9-d86765b138ce/kube-rbac-proxy/0.log" Dec 03 12:09:30 crc kubenswrapper[4756]: I1203 12:09:30.261883 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gjjx2_35183c23-2ddd-4984-8ba9-d86765b138ce/machine-api-operator/0.log" Dec 03 12:09:39 crc kubenswrapper[4756]: I1203 12:09:39.240740 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:09:39 crc kubenswrapper[4756]: E1203 12:09:39.241572 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:09:44 crc kubenswrapper[4756]: I1203 12:09:44.338490 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-bxjtd_6ab38bf9-5ba7-4205-82b7-30337cd2694f/cert-manager-controller/0.log" Dec 03 12:09:44 crc kubenswrapper[4756]: I1203 12:09:44.513580 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-lpmgs_15eea9f7-71bc-4b1d-810a-8dd3da3015f2/cert-manager-cainjector/0.log" Dec 03 12:09:44 crc kubenswrapper[4756]: I1203 12:09:44.541836 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-j2l27_23b87262-d9c4-45f6-8cc7-711f71e1a6c0/cert-manager-webhook/0.log" Dec 03 12:09:50 crc kubenswrapper[4756]: I1203 12:09:50.234427 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:09:50 crc kubenswrapper[4756]: E1203 12:09:50.235702 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:09:58 crc kubenswrapper[4756]: I1203 12:09:58.374742 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-9497w_ef21e85d-40d5-4131-af1d-5bc35102ef29/nmstate-console-plugin/0.log" Dec 03 12:09:58 crc kubenswrapper[4756]: I1203 12:09:58.568821 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8f7l7_379d2dc0-2b74-4c9e-936e-160b41e74098/nmstate-handler/0.log" Dec 
03 12:09:58 crc kubenswrapper[4756]: I1203 12:09:58.637168 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-znss6_f3c59b4b-7ffa-46bb-a92c-d8ae4218335d/kube-rbac-proxy/0.log" Dec 03 12:09:58 crc kubenswrapper[4756]: I1203 12:09:58.653346 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-znss6_f3c59b4b-7ffa-46bb-a92c-d8ae4218335d/nmstate-metrics/0.log" Dec 03 12:09:58 crc kubenswrapper[4756]: I1203 12:09:58.851884 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-49m5x_3ff12a67-01d3-4dcd-9528-4625113befa2/nmstate-operator/0.log" Dec 03 12:09:58 crc kubenswrapper[4756]: I1203 12:09:58.896780 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-ztzcc_5e29e32a-6823-448d-9af0-1b4aa213a0d2/nmstate-webhook/0.log" Dec 03 12:10:01 crc kubenswrapper[4756]: I1203 12:10:01.238729 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:10:01 crc kubenswrapper[4756]: E1203 12:10:01.239557 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.233884 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:10:15 crc kubenswrapper[4756]: E1203 12:10:15.234916 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.265251 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-bq2xc_dbcde851-76fe-4be1-ae50-ed62ebcc75a3/kube-rbac-proxy/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.376033 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-bq2xc_dbcde851-76fe-4be1-ae50-ed62ebcc75a3/controller/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.500458 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.707103 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.756333 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.775274 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.786330 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.997384 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.997497 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:10:15 crc kubenswrapper[4756]: I1203 12:10:15.999576 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.010919 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.213041 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-reloader/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.235159 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-frr-files/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.247374 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/cp-metrics/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.273712 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/controller/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.450260 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/frr-metrics/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.492471 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/kube-rbac-proxy/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.597138 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/kube-rbac-proxy-frr/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.737996 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/reloader/0.log" Dec 03 12:10:16 crc kubenswrapper[4756]: I1203 12:10:16.854583 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-v6jcq_4686b721-17d6-4951-9107-81ba7c5f1658/frr-k8s-webhook-server/0.log" Dec 03 12:10:17 crc kubenswrapper[4756]: I1203 12:10:17.066907 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fdcbcf598-mh4g6_ca73ab13-f16d-40fe-b7a1-f2c5b93e7456/manager/0.log" Dec 03 12:10:18 crc kubenswrapper[4756]: I1203 12:10:18.074491 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7n2sf_ca66dbbe-4a6d-460a-8ecc-2fd16d2a32cf/frr/0.log" Dec 03 12:10:18 crc kubenswrapper[4756]: I1203 12:10:18.182667 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b69d8d987-v5lqb_3a1c84bb-c40c-4af4-80a4-75991c028724/webhook-server/0.log" Dec 03 12:10:18 crc kubenswrapper[4756]: I1203 12:10:18.221105 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7k2pt_dec16985-cd5e-425b-a72d-9a13e835c965/kube-rbac-proxy/0.log" Dec 03 12:10:18 crc kubenswrapper[4756]: I1203 12:10:18.585000 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7k2pt_dec16985-cd5e-425b-a72d-9a13e835c965/speaker/0.log" Dec 03 12:10:30 crc kubenswrapper[4756]: I1203 12:10:30.234399 4756 scope.go:117] 
"RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:10:30 crc kubenswrapper[4756]: E1203 12:10:30.235487 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:10:31 crc kubenswrapper[4756]: I1203 12:10:31.970206 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/util/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.195305 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/util/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.254527 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/pull/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.262185 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/pull/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.456657 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/util/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.469096 4756 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/pull/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.479508 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fw2ztb_660ad08f-eaf7-484d-a630-05b51ac13d57/extract/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.634447 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/util/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.842108 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/pull/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.883100 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/pull/0.log" Dec 03 12:10:32 crc kubenswrapper[4756]: I1203 12:10:32.910157 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/util/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.045787 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/util/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.106476 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/pull/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.113168 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839hb84_c8b2a0d1-86e1-4b61-92df-c41e299a3f58/extract/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.228975 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-utilities/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.447716 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-utilities/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.447783 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-content/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.495393 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-content/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.645807 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-utilities/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.652578 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/extract-content/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.862653 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-ssjdj_3609b14b-ffbe-45d5-818d-d6a01bf0b5d1/registry-server/0.log" Dec 03 12:10:33 crc kubenswrapper[4756]: I1203 12:10:33.873183 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-utilities/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.111362 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-content/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.116648 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-utilities/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.121254 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-content/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.306973 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-utilities/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.328557 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/extract-content/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.585915 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-utilities/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.604232 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r52tb_7f5dea91-6dce-4093-a943-05e3359b754d/marketplace-operator/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.860196 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-content/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.892978 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-utilities/0.log" Dec 03 12:10:34 crc kubenswrapper[4756]: I1203 12:10:34.893767 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-content/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.017022 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t6jxs_6cb28c3f-5e91-4b53-8eb9-7878c29595a2/registry-server/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.125261 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-content/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.136240 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/extract-utilities/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.282688 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-utilities/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.303417 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-d88tf_98d8f022-96b9-4992-847a-52bf83ddb778/registry-server/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.521116 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-content/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.527185 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-utilities/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.543009 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-content/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.752464 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-utilities/0.log" Dec 03 12:10:35 crc kubenswrapper[4756]: I1203 12:10:35.784182 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/extract-content/0.log" Dec 03 12:10:36 crc kubenswrapper[4756]: I1203 12:10:36.356537 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rncfc_7dad8b84-7bcf-411d-87e7-f91db9494b86/registry-server/0.log" Dec 03 12:10:44 crc kubenswrapper[4756]: I1203 12:10:44.235184 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:10:44 crc kubenswrapper[4756]: E1203 12:10:44.236153 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:10:48 crc kubenswrapper[4756]: I1203 12:10:48.736622 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-cdcf55d99-k7bmw" podUID="2f962509-8bee-4b75-a51f-f517ffa88908" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.381007 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2mqf"] Dec 03 12:10:58 crc kubenswrapper[4756]: E1203 12:10:58.382234 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c39d444-04cd-417d-8c73-9631356dec48" containerName="container-00" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.382253 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c39d444-04cd-417d-8c73-9631356dec48" containerName="container-00" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.382600 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c39d444-04cd-417d-8c73-9631356dec48" containerName="container-00" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.385099 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.393178 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2mqf"] Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.435355 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-utilities\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.435471 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-catalog-content\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.435585 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qm8d\" (UniqueName: \"kubernetes.io/projected/4ab75f90-6b24-499d-b614-784268353bd2-kube-api-access-6qm8d\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.537701 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-utilities\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.537819 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-catalog-content\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.537917 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qm8d\" (UniqueName: \"kubernetes.io/projected/4ab75f90-6b24-499d-b614-784268353bd2-kube-api-access-6qm8d\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.538697 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-catalog-content\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.538860 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-utilities\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.556161 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qm8d\" (UniqueName: \"kubernetes.io/projected/4ab75f90-6b24-499d-b614-784268353bd2-kube-api-access-6qm8d\") pod \"redhat-operators-j2mqf\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:58 crc kubenswrapper[4756]: I1203 12:10:58.706321 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:10:59 crc kubenswrapper[4756]: I1203 12:10:59.248039 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:10:59 crc kubenswrapper[4756]: E1203 12:10:59.248887 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:10:59 crc kubenswrapper[4756]: I1203 12:10:59.323651 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2mqf"] Dec 03 12:10:59 crc kubenswrapper[4756]: I1203 12:10:59.901326 4756 generic.go:334] "Generic (PLEG): container finished" podID="4ab75f90-6b24-499d-b614-784268353bd2" containerID="b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4" exitCode=0 Dec 03 12:10:59 crc kubenswrapper[4756]: I1203 12:10:59.901392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2mqf" event={"ID":"4ab75f90-6b24-499d-b614-784268353bd2","Type":"ContainerDied","Data":"b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4"} Dec 03 12:10:59 crc kubenswrapper[4756]: I1203 12:10:59.901856 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2mqf" event={"ID":"4ab75f90-6b24-499d-b614-784268353bd2","Type":"ContainerStarted","Data":"04d6e549d2f3617d24be6423795079d118bc12b1e8839fbfb4b8af9d05e81d74"} Dec 03 12:10:59 crc kubenswrapper[4756]: I1203 12:10:59.905561 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 12:11:01 crc 
kubenswrapper[4756]: I1203 12:11:01.933154 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2mqf" event={"ID":"4ab75f90-6b24-499d-b614-784268353bd2","Type":"ContainerStarted","Data":"a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212"} Dec 03 12:11:02 crc kubenswrapper[4756]: I1203 12:11:02.943134 4756 generic.go:334] "Generic (PLEG): container finished" podID="4ab75f90-6b24-499d-b614-784268353bd2" containerID="a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212" exitCode=0 Dec 03 12:11:02 crc kubenswrapper[4756]: I1203 12:11:02.943188 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2mqf" event={"ID":"4ab75f90-6b24-499d-b614-784268353bd2","Type":"ContainerDied","Data":"a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212"} Dec 03 12:11:04 crc kubenswrapper[4756]: I1203 12:11:04.964183 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2mqf" event={"ID":"4ab75f90-6b24-499d-b614-784268353bd2","Type":"ContainerStarted","Data":"e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a"} Dec 03 12:11:04 crc kubenswrapper[4756]: I1203 12:11:04.990072 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2mqf" podStartSLOduration=3.090978669 podStartE2EDuration="6.990049531s" podCreationTimestamp="2025-12-03 12:10:58 +0000 UTC" firstStartedPulling="2025-12-03 12:10:59.905267772 +0000 UTC m=+4670.935269016" lastFinishedPulling="2025-12-03 12:11:03.804338624 +0000 UTC m=+4674.834339878" observedRunningTime="2025-12-03 12:11:04.98148039 +0000 UTC m=+4676.011481644" watchObservedRunningTime="2025-12-03 12:11:04.990049531 +0000 UTC m=+4676.020050775" Dec 03 12:11:08 crc kubenswrapper[4756]: I1203 12:11:08.708263 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:11:08 crc kubenswrapper[4756]: I1203 12:11:08.708865 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:11:09 crc kubenswrapper[4756]: I1203 12:11:09.759868 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2mqf" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="registry-server" probeResult="failure" output=< Dec 03 12:11:09 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s Dec 03 12:11:09 crc kubenswrapper[4756]: > Dec 03 12:11:14 crc kubenswrapper[4756]: I1203 12:11:14.233629 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:11:14 crc kubenswrapper[4756]: E1203 12:11:14.234639 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:11:18 crc kubenswrapper[4756]: I1203 12:11:18.756231 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:11:19 crc kubenswrapper[4756]: I1203 12:11:19.319781 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:11:19 crc kubenswrapper[4756]: I1203 12:11:19.366517 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2mqf"] Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.100360 4756 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-j2mqf" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="registry-server" containerID="cri-o://e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a" gracePeriod=2 Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.587871 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.705699 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qm8d\" (UniqueName: \"kubernetes.io/projected/4ab75f90-6b24-499d-b614-784268353bd2-kube-api-access-6qm8d\") pod \"4ab75f90-6b24-499d-b614-784268353bd2\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.705795 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-catalog-content\") pod \"4ab75f90-6b24-499d-b614-784268353bd2\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.705919 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-utilities\") pod \"4ab75f90-6b24-499d-b614-784268353bd2\" (UID: \"4ab75f90-6b24-499d-b614-784268353bd2\") " Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.707598 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-utilities" (OuterVolumeSpecName: "utilities") pod "4ab75f90-6b24-499d-b614-784268353bd2" (UID: "4ab75f90-6b24-499d-b614-784268353bd2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.713318 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab75f90-6b24-499d-b614-784268353bd2-kube-api-access-6qm8d" (OuterVolumeSpecName: "kube-api-access-6qm8d") pod "4ab75f90-6b24-499d-b614-784268353bd2" (UID: "4ab75f90-6b24-499d-b614-784268353bd2"). InnerVolumeSpecName "kube-api-access-6qm8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.807733 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qm8d\" (UniqueName: \"kubernetes.io/projected/4ab75f90-6b24-499d-b614-784268353bd2-kube-api-access-6qm8d\") on node \"crc\" DevicePath \"\"" Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.807780 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.847500 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ab75f90-6b24-499d-b614-784268353bd2" (UID: "4ab75f90-6b24-499d-b614-784268353bd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:11:20 crc kubenswrapper[4756]: I1203 12:11:20.910100 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab75f90-6b24-499d-b614-784268353bd2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.113897 4756 generic.go:334] "Generic (PLEG): container finished" podID="4ab75f90-6b24-499d-b614-784268353bd2" containerID="e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a" exitCode=0 Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.113977 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2mqf" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.113945 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2mqf" event={"ID":"4ab75f90-6b24-499d-b614-784268353bd2","Type":"ContainerDied","Data":"e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a"} Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.114119 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2mqf" event={"ID":"4ab75f90-6b24-499d-b614-784268353bd2","Type":"ContainerDied","Data":"04d6e549d2f3617d24be6423795079d118bc12b1e8839fbfb4b8af9d05e81d74"} Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.114141 4756 scope.go:117] "RemoveContainer" containerID="e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.133670 4756 scope.go:117] "RemoveContainer" containerID="a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.150200 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2mqf"] Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 
12:11:21.160523 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2mqf"] Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.170348 4756 scope.go:117] "RemoveContainer" containerID="b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.205636 4756 scope.go:117] "RemoveContainer" containerID="e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a" Dec 03 12:11:21 crc kubenswrapper[4756]: E1203 12:11:21.206439 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a\": container with ID starting with e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a not found: ID does not exist" containerID="e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.206472 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a"} err="failed to get container status \"e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a\": rpc error: code = NotFound desc = could not find container \"e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a\": container with ID starting with e8aca440ba5af86d01825aabf227bfa21033e4dfe56c9623a64717975f27db4a not found: ID does not exist" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.206495 4756 scope.go:117] "RemoveContainer" containerID="a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212" Dec 03 12:11:21 crc kubenswrapper[4756]: E1203 12:11:21.210346 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212\": container with ID 
starting with a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212 not found: ID does not exist" containerID="a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.210372 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212"} err="failed to get container status \"a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212\": rpc error: code = NotFound desc = could not find container \"a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212\": container with ID starting with a3f0744982d6ee66c86ce1468fc30201fe0dd5d69ed55ae4a7ddcdf191cda212 not found: ID does not exist" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.210386 4756 scope.go:117] "RemoveContainer" containerID="b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4" Dec 03 12:11:21 crc kubenswrapper[4756]: E1203 12:11:21.210767 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4\": container with ID starting with b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4 not found: ID does not exist" containerID="b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.210788 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4"} err="failed to get container status \"b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4\": rpc error: code = NotFound desc = could not find container \"b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4\": container with ID starting with b8cc40a5e2e271bddf312a20d7c2d834a3036f339dfb0e14a282d902e1b6bed4 not found: 
ID does not exist" Dec 03 12:11:21 crc kubenswrapper[4756]: I1203 12:11:21.246081 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab75f90-6b24-499d-b614-784268353bd2" path="/var/lib/kubelet/pods/4ab75f90-6b24-499d-b614-784268353bd2/volumes" Dec 03 12:11:29 crc kubenswrapper[4756]: I1203 12:11:29.240257 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:11:29 crc kubenswrapper[4756]: E1203 12:11:29.241170 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:11:44 crc kubenswrapper[4756]: I1203 12:11:44.233903 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:11:44 crc kubenswrapper[4756]: E1203 12:11:44.234744 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pppvw_openshift-machine-config-operator(f4cc39f5-d4a1-4174-8d5f-56126872107f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" Dec 03 12:11:59 crc kubenswrapper[4756]: I1203 12:11:59.269974 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f" Dec 03 12:11:59 crc kubenswrapper[4756]: I1203 12:11:59.517538 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"2f851b82450910cfc2eca385f72d53bd053f763a888a9de5f29896c93a160e5f"} Dec 03 12:12:25 crc kubenswrapper[4756]: I1203 12:12:25.777532 4756 generic.go:334] "Generic (PLEG): container finished" podID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerID="8d00ed415d2c1b6202e841eaa8e09b1052638ab9c2b66bcf7f2558e6ae7bacb5" exitCode=0 Dec 03 12:12:25 crc kubenswrapper[4756]: I1203 12:12:25.777669 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c2sww/must-gather-g8k5l" event={"ID":"5a79df98-1323-4fd3-a2e6-600935fad91e","Type":"ContainerDied","Data":"8d00ed415d2c1b6202e841eaa8e09b1052638ab9c2b66bcf7f2558e6ae7bacb5"} Dec 03 12:12:25 crc kubenswrapper[4756]: I1203 12:12:25.778708 4756 scope.go:117] "RemoveContainer" containerID="8d00ed415d2c1b6202e841eaa8e09b1052638ab9c2b66bcf7f2558e6ae7bacb5" Dec 03 12:12:26 crc kubenswrapper[4756]: I1203 12:12:26.238437 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c2sww_must-gather-g8k5l_5a79df98-1323-4fd3-a2e6-600935fad91e/gather/0.log" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.106994 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5ttn"] Dec 03 12:12:30 crc kubenswrapper[4756]: E1203 12:12:30.107909 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="registry-server" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.107922 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="registry-server" Dec 03 12:12:30 crc kubenswrapper[4756]: E1203 12:12:30.107935 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="extract-content" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.107941 4756 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="extract-content" Dec 03 12:12:30 crc kubenswrapper[4756]: E1203 12:12:30.108023 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="extract-utilities" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.108031 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="extract-utilities" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.108209 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab75f90-6b24-499d-b614-784268353bd2" containerName="registry-server" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.110481 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.120129 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5ttn"] Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.199802 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnz7\" (UniqueName: \"kubernetes.io/projected/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-kube-api-access-wrnz7\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.200104 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-catalog-content\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.200562 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-utilities\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.302768 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-utilities\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.302934 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrnz7\" (UniqueName: \"kubernetes.io/projected/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-kube-api-access-wrnz7\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.303134 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-catalog-content\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.303499 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-utilities\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.303712 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-catalog-content\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.329029 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrnz7\" (UniqueName: \"kubernetes.io/projected/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-kube-api-access-wrnz7\") pod \"redhat-marketplace-w5ttn\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.458977 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:30 crc kubenswrapper[4756]: I1203 12:12:30.978524 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5ttn"] Dec 03 12:12:31 crc kubenswrapper[4756]: I1203 12:12:31.850882 4756 generic.go:334] "Generic (PLEG): container finished" podID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerID="1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0" exitCode=0 Dec 03 12:12:31 crc kubenswrapper[4756]: I1203 12:12:31.850923 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5ttn" event={"ID":"b720d3c4-54b1-47f4-87bf-8ffbaf30545b","Type":"ContainerDied","Data":"1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0"} Dec 03 12:12:31 crc kubenswrapper[4756]: I1203 12:12:31.850972 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5ttn" event={"ID":"b720d3c4-54b1-47f4-87bf-8ffbaf30545b","Type":"ContainerStarted","Data":"2d0c0c7b338d475bf7a5c0d2eb8c5dbb7a0ee096582f803ee8aae18802931759"} Dec 03 12:12:34 crc kubenswrapper[4756]: I1203 12:12:34.879633 4756 
generic.go:334] "Generic (PLEG): container finished" podID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerID="126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18" exitCode=0 Dec 03 12:12:34 crc kubenswrapper[4756]: I1203 12:12:34.879820 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5ttn" event={"ID":"b720d3c4-54b1-47f4-87bf-8ffbaf30545b","Type":"ContainerDied","Data":"126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18"} Dec 03 12:12:35 crc kubenswrapper[4756]: I1203 12:12:35.893344 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5ttn" event={"ID":"b720d3c4-54b1-47f4-87bf-8ffbaf30545b","Type":"ContainerStarted","Data":"30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149"} Dec 03 12:12:35 crc kubenswrapper[4756]: I1203 12:12:35.921724 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5ttn" podStartSLOduration=2.493894214 podStartE2EDuration="5.921701529s" podCreationTimestamp="2025-12-03 12:12:30 +0000 UTC" firstStartedPulling="2025-12-03 12:12:31.853788043 +0000 UTC m=+4762.883789287" lastFinishedPulling="2025-12-03 12:12:35.281595348 +0000 UTC m=+4766.311596602" observedRunningTime="2025-12-03 12:12:35.910295409 +0000 UTC m=+4766.940296653" watchObservedRunningTime="2025-12-03 12:12:35.921701529 +0000 UTC m=+4766.951702773" Dec 03 12:12:37 crc kubenswrapper[4756]: I1203 12:12:37.658642 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c2sww/must-gather-g8k5l"] Dec 03 12:12:37 crc kubenswrapper[4756]: I1203 12:12:37.659538 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-c2sww/must-gather-g8k5l" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerName="copy" containerID="cri-o://da30b39c41647297faff0528cbb4830f09fba5051bf29ddb37e88ccf7eb7d062" gracePeriod=2 Dec 
03 12:12:37 crc kubenswrapper[4756]: I1203 12:12:37.677851 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c2sww/must-gather-g8k5l"] Dec 03 12:12:37 crc kubenswrapper[4756]: I1203 12:12:37.951368 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c2sww_must-gather-g8k5l_5a79df98-1323-4fd3-a2e6-600935fad91e/copy/0.log" Dec 03 12:12:37 crc kubenswrapper[4756]: I1203 12:12:37.954082 4756 generic.go:334] "Generic (PLEG): container finished" podID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerID="da30b39c41647297faff0528cbb4830f09fba5051bf29ddb37e88ccf7eb7d062" exitCode=143 Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.128746 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c2sww_must-gather-g8k5l_5a79df98-1323-4fd3-a2e6-600935fad91e/copy/0.log" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.129246 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.266073 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fn8f\" (UniqueName: \"kubernetes.io/projected/5a79df98-1323-4fd3-a2e6-600935fad91e-kube-api-access-4fn8f\") pod \"5a79df98-1323-4fd3-a2e6-600935fad91e\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.266189 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a79df98-1323-4fd3-a2e6-600935fad91e-must-gather-output\") pod \"5a79df98-1323-4fd3-a2e6-600935fad91e\" (UID: \"5a79df98-1323-4fd3-a2e6-600935fad91e\") " Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.284893 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5a79df98-1323-4fd3-a2e6-600935fad91e-kube-api-access-4fn8f" (OuterVolumeSpecName: "kube-api-access-4fn8f") pod "5a79df98-1323-4fd3-a2e6-600935fad91e" (UID: "5a79df98-1323-4fd3-a2e6-600935fad91e"). InnerVolumeSpecName "kube-api-access-4fn8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.368880 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fn8f\" (UniqueName: \"kubernetes.io/projected/5a79df98-1323-4fd3-a2e6-600935fad91e-kube-api-access-4fn8f\") on node \"crc\" DevicePath \"\"" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.434785 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a79df98-1323-4fd3-a2e6-600935fad91e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5a79df98-1323-4fd3-a2e6-600935fad91e" (UID: "5a79df98-1323-4fd3-a2e6-600935fad91e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.471263 4756 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a79df98-1323-4fd3-a2e6-600935fad91e-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.965186 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c2sww_must-gather-g8k5l_5a79df98-1323-4fd3-a2e6-600935fad91e/copy/0.log" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.966867 4756 scope.go:117] "RemoveContainer" containerID="da30b39c41647297faff0528cbb4830f09fba5051bf29ddb37e88ccf7eb7d062" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.966937 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c2sww/must-gather-g8k5l" Dec 03 12:12:38 crc kubenswrapper[4756]: I1203 12:12:38.992518 4756 scope.go:117] "RemoveContainer" containerID="8d00ed415d2c1b6202e841eaa8e09b1052638ab9c2b66bcf7f2558e6ae7bacb5" Dec 03 12:12:39 crc kubenswrapper[4756]: I1203 12:12:39.247883 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" path="/var/lib/kubelet/pods/5a79df98-1323-4fd3-a2e6-600935fad91e/volumes" Dec 03 12:12:40 crc kubenswrapper[4756]: I1203 12:12:40.459265 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:40 crc kubenswrapper[4756]: I1203 12:12:40.459584 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:40 crc kubenswrapper[4756]: I1203 12:12:40.512142 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:41 crc kubenswrapper[4756]: I1203 12:12:41.035251 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:41 crc kubenswrapper[4756]: I1203 12:12:41.083584 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5ttn"] Dec 03 12:12:43 crc kubenswrapper[4756]: I1203 12:12:43.003633 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5ttn" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerName="registry-server" containerID="cri-o://30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149" gracePeriod=2 Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.006169 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.016668 4756 generic.go:334] "Generic (PLEG): container finished" podID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerID="30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149" exitCode=0 Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.016748 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5ttn" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.016742 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5ttn" event={"ID":"b720d3c4-54b1-47f4-87bf-8ffbaf30545b","Type":"ContainerDied","Data":"30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149"} Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.017175 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5ttn" event={"ID":"b720d3c4-54b1-47f4-87bf-8ffbaf30545b","Type":"ContainerDied","Data":"2d0c0c7b338d475bf7a5c0d2eb8c5dbb7a0ee096582f803ee8aae18802931759"} Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.017200 4756 scope.go:117] "RemoveContainer" containerID="30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.065370 4756 scope.go:117] "RemoveContainer" containerID="126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.083410 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-utilities\") pod \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.083521 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wrnz7\" (UniqueName: \"kubernetes.io/projected/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-kube-api-access-wrnz7\") pod \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.083677 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-catalog-content\") pod \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\" (UID: \"b720d3c4-54b1-47f4-87bf-8ffbaf30545b\") " Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.084812 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-utilities" (OuterVolumeSpecName: "utilities") pod "b720d3c4-54b1-47f4-87bf-8ffbaf30545b" (UID: "b720d3c4-54b1-47f4-87bf-8ffbaf30545b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.090776 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-kube-api-access-wrnz7" (OuterVolumeSpecName: "kube-api-access-wrnz7") pod "b720d3c4-54b1-47f4-87bf-8ffbaf30545b" (UID: "b720d3c4-54b1-47f4-87bf-8ffbaf30545b"). InnerVolumeSpecName "kube-api-access-wrnz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.090972 4756 scope.go:117] "RemoveContainer" containerID="1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.109716 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b720d3c4-54b1-47f4-87bf-8ffbaf30545b" (UID: "b720d3c4-54b1-47f4-87bf-8ffbaf30545b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.185261 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.185286 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.185296 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrnz7\" (UniqueName: \"kubernetes.io/projected/b720d3c4-54b1-47f4-87bf-8ffbaf30545b-kube-api-access-wrnz7\") on node \"crc\" DevicePath \"\"" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.185478 4756 scope.go:117] "RemoveContainer" containerID="30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149" Dec 03 12:12:44 crc kubenswrapper[4756]: E1203 12:12:44.185901 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149\": container with ID starting with 
30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149 not found: ID does not exist" containerID="30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.185936 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149"} err="failed to get container status \"30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149\": rpc error: code = NotFound desc = could not find container \"30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149\": container with ID starting with 30508cb02268cedd8659a58d41a0c058c2b27c7a35154d0948bece3ab1d60149 not found: ID does not exist" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.185968 4756 scope.go:117] "RemoveContainer" containerID="126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18" Dec 03 12:12:44 crc kubenswrapper[4756]: E1203 12:12:44.186285 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18\": container with ID starting with 126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18 not found: ID does not exist" containerID="126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.186308 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18"} err="failed to get container status \"126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18\": rpc error: code = NotFound desc = could not find container \"126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18\": container with ID starting with 126cedbb683f66d7bc2524c0717509ea0a5266b7346d2617b018fa2e4cde0e18 not found: ID does not 
exist" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.186322 4756 scope.go:117] "RemoveContainer" containerID="1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0" Dec 03 12:12:44 crc kubenswrapper[4756]: E1203 12:12:44.186669 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0\": container with ID starting with 1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0 not found: ID does not exist" containerID="1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.186692 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0"} err="failed to get container status \"1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0\": rpc error: code = NotFound desc = could not find container \"1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0\": container with ID starting with 1a5551ef1e75d42bc9dc6bc4c4b0ec22b13b41fd2f6eec8ef0b5035da81677f0 not found: ID does not exist" Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.351359 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5ttn"] Dec 03 12:12:44 crc kubenswrapper[4756]: I1203 12:12:44.361090 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5ttn"] Dec 03 12:12:45 crc kubenswrapper[4756]: I1203 12:12:45.246094 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" path="/var/lib/kubelet/pods/b720d3c4-54b1-47f4-87bf-8ffbaf30545b/volumes" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.067466 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-jt6nz"] Dec 03 12:12:54 crc kubenswrapper[4756]: E1203 12:12:54.068381 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerName="extract-content" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068394 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerName="extract-content" Dec 03 12:12:54 crc kubenswrapper[4756]: E1203 12:12:54.068405 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerName="gather" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068411 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerName="gather" Dec 03 12:12:54 crc kubenswrapper[4756]: E1203 12:12:54.068428 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerName="registry-server" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068435 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerName="registry-server" Dec 03 12:12:54 crc kubenswrapper[4756]: E1203 12:12:54.068452 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerName="copy" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068458 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerName="copy" Dec 03 12:12:54 crc kubenswrapper[4756]: E1203 12:12:54.068471 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerName="extract-utilities" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068477 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" 
containerName="extract-utilities" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068660 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerName="gather" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068676 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b720d3c4-54b1-47f4-87bf-8ffbaf30545b" containerName="registry-server" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.068687 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a79df98-1323-4fd3-a2e6-600935fad91e" containerName="copy" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.070332 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.102008 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jt6nz"] Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.173381 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9jk\" (UniqueName: \"kubernetes.io/projected/980ebbb7-73b2-4940-ae2c-fcab161c745a-kube-api-access-xg9jk\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.173719 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-utilities\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.173898 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-catalog-content\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.275488 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-utilities\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.275652 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-catalog-content\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.275712 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9jk\" (UniqueName: \"kubernetes.io/projected/980ebbb7-73b2-4940-ae2c-fcab161c745a-kube-api-access-xg9jk\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.276077 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-utilities\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.276138 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-catalog-content\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.295686 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9jk\" (UniqueName: \"kubernetes.io/projected/980ebbb7-73b2-4940-ae2c-fcab161c745a-kube-api-access-xg9jk\") pod \"community-operators-jt6nz\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.392557 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:12:54 crc kubenswrapper[4756]: I1203 12:12:54.948589 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jt6nz"] Dec 03 12:12:56 crc kubenswrapper[4756]: I1203 12:12:56.150669 4756 generic.go:334] "Generic (PLEG): container finished" podID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerID="f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577" exitCode=0 Dec 03 12:12:56 crc kubenswrapper[4756]: I1203 12:12:56.150715 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt6nz" event={"ID":"980ebbb7-73b2-4940-ae2c-fcab161c745a","Type":"ContainerDied","Data":"f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577"} Dec 03 12:12:56 crc kubenswrapper[4756]: I1203 12:12:56.151177 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt6nz" event={"ID":"980ebbb7-73b2-4940-ae2c-fcab161c745a","Type":"ContainerStarted","Data":"0f1a80f4015a6075d127bbbb37c825696d7d67ad409099c12070212cf19f4176"} Dec 03 12:12:58 crc kubenswrapper[4756]: I1203 12:12:58.172307 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerID="1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8" exitCode=0 Dec 03 12:12:58 crc kubenswrapper[4756]: I1203 12:12:58.172492 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt6nz" event={"ID":"980ebbb7-73b2-4940-ae2c-fcab161c745a","Type":"ContainerDied","Data":"1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8"} Dec 03 12:12:59 crc kubenswrapper[4756]: I1203 12:12:59.186612 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt6nz" event={"ID":"980ebbb7-73b2-4940-ae2c-fcab161c745a","Type":"ContainerStarted","Data":"83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861"} Dec 03 12:12:59 crc kubenswrapper[4756]: I1203 12:12:59.211744 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jt6nz" podStartSLOduration=2.622286614 podStartE2EDuration="5.211702335s" podCreationTimestamp="2025-12-03 12:12:54 +0000 UTC" firstStartedPulling="2025-12-03 12:12:56.15366088 +0000 UTC m=+4787.183662124" lastFinishedPulling="2025-12-03 12:12:58.743076601 +0000 UTC m=+4789.773077845" observedRunningTime="2025-12-03 12:12:59.207251905 +0000 UTC m=+4790.237253179" watchObservedRunningTime="2025-12-03 12:12:59.211702335 +0000 UTC m=+4790.241703579" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.393088 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.393604 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.407225 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nxwm"] Dec 03 12:13:04 crc 
kubenswrapper[4756]: I1203 12:13:04.417018 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.437937 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nxwm"] Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.475355 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jpv\" (UniqueName: \"kubernetes.io/projected/26739755-acf7-4c74-a9e5-b45a99125b18-kube-api-access-q6jpv\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.475479 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-catalog-content\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.475509 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-utilities\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.497255 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.577076 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-catalog-content\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.577128 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-utilities\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.577262 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6jpv\" (UniqueName: \"kubernetes.io/projected/26739755-acf7-4c74-a9e5-b45a99125b18-kube-api-access-q6jpv\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.578009 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-catalog-content\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.578224 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-utilities\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:04 crc kubenswrapper[4756]: I1203 12:13:04.998873 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6jpv\" (UniqueName: 
\"kubernetes.io/projected/26739755-acf7-4c74-a9e5-b45a99125b18-kube-api-access-q6jpv\") pod \"certified-operators-9nxwm\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:05 crc kubenswrapper[4756]: I1203 12:13:05.069673 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:05 crc kubenswrapper[4756]: I1203 12:13:05.342031 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:13:05 crc kubenswrapper[4756]: I1203 12:13:05.653317 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nxwm"] Dec 03 12:13:06 crc kubenswrapper[4756]: I1203 12:13:06.268508 4756 generic.go:334] "Generic (PLEG): container finished" podID="26739755-acf7-4c74-a9e5-b45a99125b18" containerID="84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a" exitCode=0 Dec 03 12:13:06 crc kubenswrapper[4756]: I1203 12:13:06.268615 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxwm" event={"ID":"26739755-acf7-4c74-a9e5-b45a99125b18","Type":"ContainerDied","Data":"84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a"} Dec 03 12:13:06 crc kubenswrapper[4756]: I1203 12:13:06.269034 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxwm" event={"ID":"26739755-acf7-4c74-a9e5-b45a99125b18","Type":"ContainerStarted","Data":"eefc3b518bbc422aa50cb1fdd605623b27b3a29d2f6c66b42bc3585ea019820b"} Dec 03 12:13:06 crc kubenswrapper[4756]: I1203 12:13:06.778204 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jt6nz"] Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.275837 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-jt6nz" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="registry-server" containerID="cri-o://83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861" gracePeriod=2 Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.757085 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.840604 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-utilities\") pod \"980ebbb7-73b2-4940-ae2c-fcab161c745a\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.840655 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9jk\" (UniqueName: \"kubernetes.io/projected/980ebbb7-73b2-4940-ae2c-fcab161c745a-kube-api-access-xg9jk\") pod \"980ebbb7-73b2-4940-ae2c-fcab161c745a\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.841044 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-catalog-content\") pod \"980ebbb7-73b2-4940-ae2c-fcab161c745a\" (UID: \"980ebbb7-73b2-4940-ae2c-fcab161c745a\") " Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.842344 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-utilities" (OuterVolumeSpecName: "utilities") pod "980ebbb7-73b2-4940-ae2c-fcab161c745a" (UID: "980ebbb7-73b2-4940-ae2c-fcab161c745a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.851109 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980ebbb7-73b2-4940-ae2c-fcab161c745a-kube-api-access-xg9jk" (OuterVolumeSpecName: "kube-api-access-xg9jk") pod "980ebbb7-73b2-4940-ae2c-fcab161c745a" (UID: "980ebbb7-73b2-4940-ae2c-fcab161c745a"). InnerVolumeSpecName "kube-api-access-xg9jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.905220 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "980ebbb7-73b2-4940-ae2c-fcab161c745a" (UID: "980ebbb7-73b2-4940-ae2c-fcab161c745a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.944099 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.944157 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9jk\" (UniqueName: \"kubernetes.io/projected/980ebbb7-73b2-4940-ae2c-fcab161c745a-kube-api-access-xg9jk\") on node \"crc\" DevicePath \"\"" Dec 03 12:13:07 crc kubenswrapper[4756]: I1203 12:13:07.944178 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/980ebbb7-73b2-4940-ae2c-fcab161c745a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.298819 4756 generic.go:334] "Generic (PLEG): container finished" podID="980ebbb7-73b2-4940-ae2c-fcab161c745a" 
containerID="83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861" exitCode=0 Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.298985 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt6nz" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.299021 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt6nz" event={"ID":"980ebbb7-73b2-4940-ae2c-fcab161c745a","Type":"ContainerDied","Data":"83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861"} Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.300042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt6nz" event={"ID":"980ebbb7-73b2-4940-ae2c-fcab161c745a","Type":"ContainerDied","Data":"0f1a80f4015a6075d127bbbb37c825696d7d67ad409099c12070212cf19f4176"} Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.300080 4756 scope.go:117] "RemoveContainer" containerID="83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.321946 4756 scope.go:117] "RemoveContainer" containerID="1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.339600 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jt6nz"] Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.357534 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jt6nz"] Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.373479 4756 scope.go:117] "RemoveContainer" containerID="f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.397750 4756 scope.go:117] "RemoveContainer" containerID="83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861" Dec 03 
12:13:08 crc kubenswrapper[4756]: E1203 12:13:08.398294 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861\": container with ID starting with 83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861 not found: ID does not exist" containerID="83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.398374 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861"} err="failed to get container status \"83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861\": rpc error: code = NotFound desc = could not find container \"83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861\": container with ID starting with 83a4852b120a21beb95de28e409da8d75e8bc024d5d15c70f220b4ae21f9c861 not found: ID does not exist" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.398415 4756 scope.go:117] "RemoveContainer" containerID="1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8" Dec 03 12:13:08 crc kubenswrapper[4756]: E1203 12:13:08.398776 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8\": container with ID starting with 1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8 not found: ID does not exist" containerID="1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.398825 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8"} err="failed to get container status 
\"1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8\": rpc error: code = NotFound desc = could not find container \"1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8\": container with ID starting with 1ad617c099b743630e22ba262124e0d542076ce8db0879061a92ce3431a138d8 not found: ID does not exist" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.398860 4756 scope.go:117] "RemoveContainer" containerID="f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577" Dec 03 12:13:08 crc kubenswrapper[4756]: E1203 12:13:08.399158 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577\": container with ID starting with f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577 not found: ID does not exist" containerID="f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577" Dec 03 12:13:08 crc kubenswrapper[4756]: I1203 12:13:08.399248 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577"} err="failed to get container status \"f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577\": rpc error: code = NotFound desc = could not find container \"f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577\": container with ID starting with f9a3b6ac49988892c11740169b9e064e6fbe1758050718e8c36b1c43982a5577 not found: ID does not exist" Dec 03 12:13:09 crc kubenswrapper[4756]: I1203 12:13:09.245273 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" path="/var/lib/kubelet/pods/980ebbb7-73b2-4940-ae2c-fcab161c745a/volumes" Dec 03 12:13:09 crc kubenswrapper[4756]: I1203 12:13:09.310142 4756 generic.go:334] "Generic (PLEG): container finished" podID="26739755-acf7-4c74-a9e5-b45a99125b18" 
containerID="ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c" exitCode=0 Dec 03 12:13:09 crc kubenswrapper[4756]: I1203 12:13:09.310205 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxwm" event={"ID":"26739755-acf7-4c74-a9e5-b45a99125b18","Type":"ContainerDied","Data":"ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c"} Dec 03 12:13:10 crc kubenswrapper[4756]: I1203 12:13:10.322526 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxwm" event={"ID":"26739755-acf7-4c74-a9e5-b45a99125b18","Type":"ContainerStarted","Data":"03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa"} Dec 03 12:13:10 crc kubenswrapper[4756]: I1203 12:13:10.341437 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nxwm" podStartSLOduration=2.554213986 podStartE2EDuration="6.341413054s" podCreationTimestamp="2025-12-03 12:13:04 +0000 UTC" firstStartedPulling="2025-12-03 12:13:06.270650239 +0000 UTC m=+4797.300651483" lastFinishedPulling="2025-12-03 12:13:10.057849307 +0000 UTC m=+4801.087850551" observedRunningTime="2025-12-03 12:13:10.339038409 +0000 UTC m=+4801.369039663" watchObservedRunningTime="2025-12-03 12:13:10.341413054 +0000 UTC m=+4801.371414298" Dec 03 12:13:12 crc kubenswrapper[4756]: I1203 12:13:12.840143 4756 scope.go:117] "RemoveContainer" containerID="5f393e77113e47a16752b3f36e38c37b8a91a93dcd8ef8503c1a57a1b6171ed5" Dec 03 12:13:15 crc kubenswrapper[4756]: I1203 12:13:15.071500 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:15 crc kubenswrapper[4756]: I1203 12:13:15.072112 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:15 crc kubenswrapper[4756]: I1203 12:13:15.133648 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:15 crc kubenswrapper[4756]: I1203 12:13:15.413041 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:15 crc kubenswrapper[4756]: I1203 12:13:15.467312 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nxwm"] Dec 03 12:13:17 crc kubenswrapper[4756]: I1203 12:13:17.385768 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nxwm" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="registry-server" containerID="cri-o://03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa" gracePeriod=2 Dec 03 12:13:17 crc kubenswrapper[4756]: I1203 12:13:17.858891 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.037982 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-utilities\") pod \"26739755-acf7-4c74-a9e5-b45a99125b18\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.038027 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-catalog-content\") pod \"26739755-acf7-4c74-a9e5-b45a99125b18\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.038088 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6jpv\" (UniqueName: 
\"kubernetes.io/projected/26739755-acf7-4c74-a9e5-b45a99125b18-kube-api-access-q6jpv\") pod \"26739755-acf7-4c74-a9e5-b45a99125b18\" (UID: \"26739755-acf7-4c74-a9e5-b45a99125b18\") " Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.039386 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-utilities" (OuterVolumeSpecName: "utilities") pod "26739755-acf7-4c74-a9e5-b45a99125b18" (UID: "26739755-acf7-4c74-a9e5-b45a99125b18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.043379 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26739755-acf7-4c74-a9e5-b45a99125b18-kube-api-access-q6jpv" (OuterVolumeSpecName: "kube-api-access-q6jpv") pod "26739755-acf7-4c74-a9e5-b45a99125b18" (UID: "26739755-acf7-4c74-a9e5-b45a99125b18"). InnerVolumeSpecName "kube-api-access-q6jpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.119851 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26739755-acf7-4c74-a9e5-b45a99125b18" (UID: "26739755-acf7-4c74-a9e5-b45a99125b18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.145409 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.145514 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26739755-acf7-4c74-a9e5-b45a99125b18-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.145894 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6jpv\" (UniqueName: \"kubernetes.io/projected/26739755-acf7-4c74-a9e5-b45a99125b18-kube-api-access-q6jpv\") on node \"crc\" DevicePath \"\"" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.397384 4756 generic.go:334] "Generic (PLEG): container finished" podID="26739755-acf7-4c74-a9e5-b45a99125b18" containerID="03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa" exitCode=0 Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.397483 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nxwm" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.397482 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxwm" event={"ID":"26739755-acf7-4c74-a9e5-b45a99125b18","Type":"ContainerDied","Data":"03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa"} Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.397943 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nxwm" event={"ID":"26739755-acf7-4c74-a9e5-b45a99125b18","Type":"ContainerDied","Data":"eefc3b518bbc422aa50cb1fdd605623b27b3a29d2f6c66b42bc3585ea019820b"} Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.397983 4756 scope.go:117] "RemoveContainer" containerID="03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.425826 4756 scope.go:117] "RemoveContainer" containerID="ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.443586 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nxwm"] Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.454077 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nxwm"] Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.465309 4756 scope.go:117] "RemoveContainer" containerID="84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.513156 4756 scope.go:117] "RemoveContainer" containerID="03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa" Dec 03 12:13:18 crc kubenswrapper[4756]: E1203 12:13:18.513729 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa\": container with ID starting with 03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa not found: ID does not exist" containerID="03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.513797 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa"} err="failed to get container status \"03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa\": rpc error: code = NotFound desc = could not find container \"03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa\": container with ID starting with 03ac9ed60607bfeb0570629191d951ae4d14670db9ba69cf68c27bd3e455e6fa not found: ID does not exist" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.513827 4756 scope.go:117] "RemoveContainer" containerID="ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c" Dec 03 12:13:18 crc kubenswrapper[4756]: E1203 12:13:18.514250 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c\": container with ID starting with ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c not found: ID does not exist" containerID="ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.514292 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c"} err="failed to get container status \"ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c\": rpc error: code = NotFound desc = could not find container \"ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c\": container with ID 
starting with ff2fd7c66f4d7eb7702a8e75b6e47b28fd0bcf71cf6147be050dde2b468b739c not found: ID does not exist" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.514324 4756 scope.go:117] "RemoveContainer" containerID="84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a" Dec 03 12:13:18 crc kubenswrapper[4756]: E1203 12:13:18.514699 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a\": container with ID starting with 84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a not found: ID does not exist" containerID="84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a" Dec 03 12:13:18 crc kubenswrapper[4756]: I1203 12:13:18.514732 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a"} err="failed to get container status \"84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a\": rpc error: code = NotFound desc = could not find container \"84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a\": container with ID starting with 84627c4736e8b236598b804895217fe4522a8a3256dfa4510bcb2280c2302f3a not found: ID does not exist" Dec 03 12:13:19 crc kubenswrapper[4756]: I1203 12:13:19.246282 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" path="/var/lib/kubelet/pods/26739755-acf7-4c74-a9e5-b45a99125b18/volumes" Dec 03 12:14:22 crc kubenswrapper[4756]: I1203 12:14:22.606791 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:14:22 crc kubenswrapper[4756]: I1203 
12:14:22.608050 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:14:52 crc kubenswrapper[4756]: I1203 12:14:52.606940 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:14:52 crc kubenswrapper[4756]: I1203 12:14:52.607667 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.157779 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh"] Dec 03 12:15:00 crc kubenswrapper[4756]: E1203 12:15:00.158647 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="extract-content" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158660 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="extract-content" Dec 03 12:15:00 crc kubenswrapper[4756]: E1203 12:15:00.158676 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="extract-content" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158685 4756 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="extract-content" Dec 03 12:15:00 crc kubenswrapper[4756]: E1203 12:15:00.158704 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="registry-server" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158710 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="registry-server" Dec 03 12:15:00 crc kubenswrapper[4756]: E1203 12:15:00.158728 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="registry-server" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158735 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="registry-server" Dec 03 12:15:00 crc kubenswrapper[4756]: E1203 12:15:00.158755 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="extract-utilities" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158763 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="extract-utilities" Dec 03 12:15:00 crc kubenswrapper[4756]: E1203 12:15:00.158777 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="extract-utilities" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158784 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="extract-utilities" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158981 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="980ebbb7-73b2-4940-ae2c-fcab161c745a" containerName="registry-server" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.158995 4756 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="26739755-acf7-4c74-a9e5-b45a99125b18" containerName="registry-server" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.159640 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.162040 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.163002 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.170909 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh"] Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.347339 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c70564a-daef-4a04-82c5-b6efb98deaf7-secret-volume\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.347414 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8llp\" (UniqueName: \"kubernetes.io/projected/1c70564a-daef-4a04-82c5-b6efb98deaf7-kube-api-access-d8llp\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.347612 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1c70564a-daef-4a04-82c5-b6efb98deaf7-config-volume\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.449680 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c70564a-daef-4a04-82c5-b6efb98deaf7-config-volume\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.449942 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c70564a-daef-4a04-82c5-b6efb98deaf7-secret-volume\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.450048 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8llp\" (UniqueName: \"kubernetes.io/projected/1c70564a-daef-4a04-82c5-b6efb98deaf7-kube-api-access-d8llp\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.450611 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c70564a-daef-4a04-82c5-b6efb98deaf7-config-volume\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.992782 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8llp\" (UniqueName: \"kubernetes.io/projected/1c70564a-daef-4a04-82c5-b6efb98deaf7-kube-api-access-d8llp\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:00 crc kubenswrapper[4756]: I1203 12:15:00.997230 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c70564a-daef-4a04-82c5-b6efb98deaf7-secret-volume\") pod \"collect-profiles-29412735-98vzh\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:01 crc kubenswrapper[4756]: I1203 12:15:01.084876 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:01 crc kubenswrapper[4756]: I1203 12:15:01.528801 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh"] Dec 03 12:15:02 crc kubenswrapper[4756]: I1203 12:15:02.413615 4756 generic.go:334] "Generic (PLEG): container finished" podID="1c70564a-daef-4a04-82c5-b6efb98deaf7" containerID="5152feaf462f7144a08bec12bbcc97aa862c0411dfb27b2e6c2628e8634db2e5" exitCode=0 Dec 03 12:15:02 crc kubenswrapper[4756]: I1203 12:15:02.413735 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" event={"ID":"1c70564a-daef-4a04-82c5-b6efb98deaf7","Type":"ContainerDied","Data":"5152feaf462f7144a08bec12bbcc97aa862c0411dfb27b2e6c2628e8634db2e5"} Dec 03 12:15:02 crc kubenswrapper[4756]: I1203 12:15:02.414008 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" 
event={"ID":"1c70564a-daef-4a04-82c5-b6efb98deaf7","Type":"ContainerStarted","Data":"fec16b91482f679f36c59bcb584258386474ccb900866381b60499c042b0c443"} Dec 03 12:15:03 crc kubenswrapper[4756]: I1203 12:15:03.748672 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:03 crc kubenswrapper[4756]: I1203 12:15:03.916450 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c70564a-daef-4a04-82c5-b6efb98deaf7-config-volume\") pod \"1c70564a-daef-4a04-82c5-b6efb98deaf7\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " Dec 03 12:15:03 crc kubenswrapper[4756]: I1203 12:15:03.916597 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8llp\" (UniqueName: \"kubernetes.io/projected/1c70564a-daef-4a04-82c5-b6efb98deaf7-kube-api-access-d8llp\") pod \"1c70564a-daef-4a04-82c5-b6efb98deaf7\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " Dec 03 12:15:03 crc kubenswrapper[4756]: I1203 12:15:03.916697 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c70564a-daef-4a04-82c5-b6efb98deaf7-secret-volume\") pod \"1c70564a-daef-4a04-82c5-b6efb98deaf7\" (UID: \"1c70564a-daef-4a04-82c5-b6efb98deaf7\") " Dec 03 12:15:03 crc kubenswrapper[4756]: I1203 12:15:03.917545 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c70564a-daef-4a04-82c5-b6efb98deaf7-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c70564a-daef-4a04-82c5-b6efb98deaf7" (UID: "1c70564a-daef-4a04-82c5-b6efb98deaf7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 12:15:03 crc kubenswrapper[4756]: I1203 12:15:03.923507 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c70564a-daef-4a04-82c5-b6efb98deaf7-kube-api-access-d8llp" (OuterVolumeSpecName: "kube-api-access-d8llp") pod "1c70564a-daef-4a04-82c5-b6efb98deaf7" (UID: "1c70564a-daef-4a04-82c5-b6efb98deaf7"). InnerVolumeSpecName "kube-api-access-d8llp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 12:15:03 crc kubenswrapper[4756]: I1203 12:15:03.923882 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c70564a-daef-4a04-82c5-b6efb98deaf7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c70564a-daef-4a04-82c5-b6efb98deaf7" (UID: "1c70564a-daef-4a04-82c5-b6efb98deaf7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.018459 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c70564a-daef-4a04-82c5-b6efb98deaf7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.018494 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c70564a-daef-4a04-82c5-b6efb98deaf7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.018506 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8llp\" (UniqueName: \"kubernetes.io/projected/1c70564a-daef-4a04-82c5-b6efb98deaf7-kube-api-access-d8llp\") on node \"crc\" DevicePath \"\"" Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.433174 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" 
event={"ID":"1c70564a-daef-4a04-82c5-b6efb98deaf7","Type":"ContainerDied","Data":"fec16b91482f679f36c59bcb584258386474ccb900866381b60499c042b0c443"} Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.433229 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec16b91482f679f36c59bcb584258386474ccb900866381b60499c042b0c443" Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.433226 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412735-98vzh" Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.830567 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw"] Dec 03 12:15:04 crc kubenswrapper[4756]: I1203 12:15:04.839158 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412690-hsksw"] Dec 03 12:15:05 crc kubenswrapper[4756]: I1203 12:15:05.253306 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb" path="/var/lib/kubelet/pods/3b2e2743-9acd-49cc-b3e8-691cf0d6dfcb/volumes" Dec 03 12:15:12 crc kubenswrapper[4756]: I1203 12:15:12.963009 4756 scope.go:117] "RemoveContainer" containerID="8be9885a9aea0c9db62824610dd8adc409f66b1a592cdf2f455f1f88acd669cc" Dec 03 12:15:22 crc kubenswrapper[4756]: I1203 12:15:22.607365 4756 patch_prober.go:28] interesting pod/machine-config-daemon-pppvw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 12:15:22 crc kubenswrapper[4756]: I1203 12:15:22.607829 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 12:15:22 crc kubenswrapper[4756]: I1203 12:15:22.607880 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" Dec 03 12:15:22 crc kubenswrapper[4756]: I1203 12:15:22.609463 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f851b82450910cfc2eca385f72d53bd053f763a888a9de5f29896c93a160e5f"} pod="openshift-machine-config-operator/machine-config-daemon-pppvw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 12:15:22 crc kubenswrapper[4756]: I1203 12:15:22.609537 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" podUID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerName="machine-config-daemon" containerID="cri-o://2f851b82450910cfc2eca385f72d53bd053f763a888a9de5f29896c93a160e5f" gracePeriod=600 Dec 03 12:15:23 crc kubenswrapper[4756]: I1203 12:15:23.666801 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4cc39f5-d4a1-4174-8d5f-56126872107f" containerID="2f851b82450910cfc2eca385f72d53bd053f763a888a9de5f29896c93a160e5f" exitCode=0 Dec 03 12:15:23 crc kubenswrapper[4756]: I1203 12:15:23.667388 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerDied","Data":"2f851b82450910cfc2eca385f72d53bd053f763a888a9de5f29896c93a160e5f"} Dec 03 12:15:23 crc kubenswrapper[4756]: I1203 12:15:23.667443 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pppvw" 
event={"ID":"f4cc39f5-d4a1-4174-8d5f-56126872107f","Type":"ContainerStarted","Data":"1c5e4ecbad00e6d0140350ca56fa642772ef101618a757aafdb0d30960460f3c"} Dec 03 12:15:23 crc kubenswrapper[4756]: I1203 12:15:23.667460 4756 scope.go:117] "RemoveContainer" containerID="694a8eb0b7b8e87183361622f8b36a1772a5b7b1003d9040f3c29f9acf25417f"